00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 591 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3256 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.101 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.184 Using shallow fetch with depth 1 00:00:00.184 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.184 > git --version # timeout=10 00:00:00.209 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.248 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.258 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.269 Checking out Revision b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6 (FETCH_HEAD) 00:00:07.269 > git config core.sparsecheckout # timeout=10 00:00:07.279 > git read-tree -mu HEAD # timeout=10 00:00:07.295 > git checkout -f b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6 # timeout=5 00:00:07.313 Commit message: "glpi/assets/logger: set currently processing object name" 00:00:07.313 > git rev-list --no-walk b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6 # timeout=10 00:00:07.422 [Pipeline] Start of Pipeline 00:00:07.435 [Pipeline] library 00:00:07.436 Loading library shm_lib@master 00:00:07.437 Library shm_lib@master is cached. Copying from home. 00:00:07.450 [Pipeline] node 00:00:07.456 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.458 [Pipeline] { 00:00:07.468 [Pipeline] catchError 00:00:07.470 [Pipeline] { 00:00:07.480 [Pipeline] wrap 00:00:07.488 [Pipeline] { 00:00:07.494 [Pipeline] stage 00:00:07.496 [Pipeline] { (Prologue) 00:00:07.676 [Pipeline] sh 00:00:07.955 + logger -p user.info -t JENKINS-CI 00:00:07.983 [Pipeline] echo 00:00:07.985 Node: GP11 00:00:07.994 [Pipeline] sh 00:00:08.292 [Pipeline] setCustomBuildProperty 00:00:08.306 [Pipeline] echo 00:00:08.307 Cleanup processes 00:00:08.315 [Pipeline] sh 00:00:08.602 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.603 3247178 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.616 [Pipeline] sh 00:00:08.895 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.895 ++ grep -v 'sudo pgrep' 00:00:08.895 ++ awk '{print $1}' 00:00:08.895 + sudo kill -9 00:00:08.895 + true 00:00:08.909 [Pipeline] cleanWs 00:00:08.918 [WS-CLEANUP] Deleting project workspace... 00:00:08.918 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.925 [WS-CLEANUP] done 00:00:08.930 [Pipeline] setCustomBuildProperty 00:00:08.946 [Pipeline] sh 00:00:09.228 + sudo git config --global --replace-all safe.directory '*' 00:00:09.334 [Pipeline] httpRequest 00:00:09.368 [Pipeline] echo 00:00:09.369 Sorcerer 10.211.164.101 is alive 00:00:09.379 [Pipeline] httpRequest 00:00:09.384 HttpMethod: GET 00:00:09.385 URL: http://10.211.164.101/packages/jbp_b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6.tar.gz 00:00:09.386 Sending request to url: http://10.211.164.101/packages/jbp_b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6.tar.gz 00:00:09.408 Response Code: HTTP/1.1 200 OK 00:00:09.408 Success: Status code 200 is in the accepted range: 200,404 00:00:09.409 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6.tar.gz 00:00:23.352 [Pipeline] sh 00:00:23.635 + tar --no-same-owner -xf jbp_b5b39330b9cd2d07557c7dcc26f96d9d7350a4a6.tar.gz 00:00:23.652 [Pipeline] httpRequest 00:00:23.676 [Pipeline] echo 00:00:23.677 Sorcerer 10.211.164.101 is alive 00:00:23.687 [Pipeline] httpRequest 00:00:23.692 HttpMethod: GET 00:00:23.693 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:23.694 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:23.707 Response Code: HTTP/1.1 200 OK 00:00:23.707 Success: Status code 200 is in the accepted range: 200,404 00:00:23.708 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:26.461 [Pipeline] sh 00:01:26.744 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:29.291 [Pipeline] sh 00:01:29.573 + git -C spdk log --oneline -n5 00:01:29.573 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:29.573 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:29.573 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:29.573 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:29.573 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:29.591 [Pipeline] withCredentials 00:01:29.604 > git --version # timeout=10 00:01:29.615 > git --version # 'git version 2.39.2' 00:01:29.634 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:29.636 [Pipeline] { 00:01:29.647 [Pipeline] retry 00:01:29.649 [Pipeline] { 00:01:29.667 [Pipeline] sh 00:01:29.947 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:30.220 [Pipeline] } 00:01:30.244 [Pipeline] // retry 00:01:30.250 [Pipeline] } 00:01:30.271 [Pipeline] // withCredentials 00:01:30.282 [Pipeline] httpRequest 00:01:30.306 [Pipeline] echo 00:01:30.308 Sorcerer 10.211.164.101 is alive 00:01:30.317 [Pipeline] httpRequest 00:01:30.322 HttpMethod: GET 00:01:30.323 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.323 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:30.330 Response Code: HTTP/1.1 200 OK 00:01:30.331 Success: Status code 200 is in the accepted range: 200,404 00:01:30.331 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.607 [Pipeline] sh 00:01:34.887 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.801 [Pipeline] sh 00:01:37.088 + git -C dpdk log --oneline -n5 00:01:37.088 caf0f5d395 version: 22.11.4 00:01:37.088 7d6f1cc05f Revert 
"net/iavf: fix abnormal disable HW interrupt" 00:01:37.088 dc9c799c7d vhost: fix missing spinlock unlock 00:01:37.088 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:37.088 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:37.097 [Pipeline] } 00:01:37.113 [Pipeline] // stage 00:01:37.120 [Pipeline] stage 00:01:37.122 [Pipeline] { (Prepare) 00:01:37.140 [Pipeline] writeFile 00:01:37.155 [Pipeline] sh 00:01:37.434 + logger -p user.info -t JENKINS-CI 00:01:37.447 [Pipeline] sh 00:01:37.731 + logger -p user.info -t JENKINS-CI 00:01:37.745 [Pipeline] sh 00:01:38.100 + cat autorun-spdk.conf 00:01:38.100 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.100 SPDK_TEST_NVMF=1 00:01:38.100 SPDK_TEST_NVME_CLI=1 00:01:38.100 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.100 SPDK_TEST_NVMF_NICS=e810 00:01:38.100 SPDK_TEST_VFIOUSER=1 00:01:38.100 SPDK_RUN_UBSAN=1 00:01:38.100 NET_TYPE=phy 00:01:38.100 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.100 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.108 RUN_NIGHTLY=1 00:01:38.113 [Pipeline] readFile 00:01:38.140 [Pipeline] withEnv 00:01:38.142 [Pipeline] { 00:01:38.154 [Pipeline] sh 00:01:38.439 + set -ex 00:01:38.439 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:38.439 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:38.439 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.439 ++ SPDK_TEST_NVMF=1 00:01:38.439 ++ SPDK_TEST_NVME_CLI=1 00:01:38.439 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.439 ++ SPDK_TEST_NVMF_NICS=e810 00:01:38.439 ++ SPDK_TEST_VFIOUSER=1 00:01:38.439 ++ SPDK_RUN_UBSAN=1 00:01:38.439 ++ NET_TYPE=phy 00:01:38.439 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:38.439 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:38.439 ++ RUN_NIGHTLY=1 00:01:38.439 + case $SPDK_TEST_NVMF_NICS in 00:01:38.439 + DRIVERS=ice 00:01:38.439 + [[ tcp == \r\d\m\a ]] 00:01:38.439 + [[ -n ice ]] 00:01:38.439 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:38.439 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:38.439 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:38.439 rmmod: ERROR: Module irdma is not currently loaded 00:01:38.439 rmmod: ERROR: Module i40iw is not currently loaded 00:01:38.439 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:38.439 + true 00:01:38.439 + for D in $DRIVERS 00:01:38.439 + sudo modprobe ice 00:01:38.439 + exit 0 00:01:38.451 [Pipeline] } 00:01:38.475 [Pipeline] // withEnv 00:01:38.482 [Pipeline] } 00:01:38.497 [Pipeline] // stage 00:01:38.507 [Pipeline] catchError 00:01:38.509 [Pipeline] { 00:01:38.523 [Pipeline] timeout 00:01:38.524 Timeout set to expire in 50 min 00:01:38.525 [Pipeline] { 00:01:38.542 [Pipeline] stage 00:01:38.544 [Pipeline] { (Tests) 00:01:38.563 [Pipeline] sh 00:01:38.850 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:38.851 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:38.851 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:38.851 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:38.851 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:38.851 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:38.851 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:38.851 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:38.851 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:38.851 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:38.851 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:38.851 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:38.851 + source /etc/os-release 00:01:38.851 ++ NAME='Fedora Linux' 00:01:38.851 ++ VERSION='38 (Cloud Edition)' 00:01:38.851 ++ ID=fedora 00:01:38.851 ++ VERSION_ID=38 00:01:38.851 ++ VERSION_CODENAME= 00:01:38.851 ++ PLATFORM_ID=platform:f38 00:01:38.851 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:38.851 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.851 ++ LOGO=fedora-logo-icon 00:01:38.851 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:38.851 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.851 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:38.851 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.851 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.851 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.851 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:38.851 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.851 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:38.851 ++ SUPPORT_END=2024-05-14 00:01:38.851 ++ VARIANT='Cloud Edition' 00:01:38.851 ++ VARIANT_ID=cloud 00:01:38.851 + uname -a 00:01:38.851 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:38.851 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:39.786 Hugepages 00:01:39.786 node hugesize free / total 00:01:39.786 node0 1048576kB 0 / 0 00:01:39.786 node0 2048kB 0 / 0 00:01:39.786 node1 1048576kB 0 / 0 00:01:39.786 node1 2048kB 0 / 0 00:01:39.786 00:01:39.786 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.786 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:39.786 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:39.786 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:40.046 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:40.046 + rm -f /tmp/spdk-ld-path 00:01:40.046 + source autorun-spdk.conf 00:01:40.046 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.046 ++ SPDK_TEST_NVMF=1 00:01:40.046 ++ SPDK_TEST_NVME_CLI=1 00:01:40.046 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.046 ++ SPDK_TEST_NVMF_NICS=e810 00:01:40.046 ++ SPDK_TEST_VFIOUSER=1 00:01:40.046 ++ SPDK_RUN_UBSAN=1 00:01:40.046 ++ NET_TYPE=phy 00:01:40.046 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.046 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.046 ++ RUN_NIGHTLY=1 00:01:40.046 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:40.046 + [[ -n '' ]] 00:01:40.046 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.046 + for M in /var/spdk/build-*-manifest.txt 00:01:40.046 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:40.046 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:40.046 + for M in /var/spdk/build-*-manifest.txt 00:01:40.046 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:40.046 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:40.046 ++ uname 00:01:40.046 + [[ Linux == \L\i\n\u\x ]] 00:01:40.046 + sudo dmesg -T 00:01:40.046 + sudo dmesg --clear 00:01:40.046 + dmesg_pid=3247875 00:01:40.046 + [[ Fedora Linux == FreeBSD ]] 00:01:40.046 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.046 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.046 + sudo dmesg -Tw 00:01:40.046 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:40.046 + [[ -x /usr/src/fio-static/fio ]] 00:01:40.046 + export FIO_BIN=/usr/src/fio-static/fio 00:01:40.046 + FIO_BIN=/usr/src/fio-static/fio 00:01:40.046 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:40.046 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:40.046 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:40.046 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.046 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.046 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:40.046 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.046 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.046 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:40.046 Test configuration: 00:01:40.046 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.046 SPDK_TEST_NVMF=1 00:01:40.046 SPDK_TEST_NVME_CLI=1 00:01:40.046 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.046 SPDK_TEST_NVMF_NICS=e810 00:01:40.046 SPDK_TEST_VFIOUSER=1 00:01:40.046 SPDK_RUN_UBSAN=1 00:01:40.046 NET_TYPE=phy 00:01:40.046 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:40.046 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.046 RUN_NIGHTLY=1 10:31:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:40.046 10:31:56 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:40.046 10:31:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.046 10:31:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.046 10:31:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.046 10:31:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.046 10:31:56 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.046 10:31:56 -- paths/export.sh@5 -- $ export PATH 00:01:40.046 10:31:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.046 10:31:56 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:40.046 10:31:56 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:40.046 10:31:56 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720600316.XXXXXX 00:01:40.047 10:31:56 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720600316.RuVJm7 00:01:40.047 10:31:56 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.047 10:31:56 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:40.047 10:31:56 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:40.047 10:31:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.047 10:31:56 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:40.047 10:31:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.047 10:31:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.047 10:31:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.047 10:31:56 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.047 Wed Jul 10 08:31:56 AM UTC 2024 00:01:40.047 10:31:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:40.047 LTS-59-g4b94202c6 00:01:40.047 10:31:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:40.047 10:31:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.047 10:31:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.047 10:31:56 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:40.047 10:31:56 -- common/autotest_common.sh@1083 -- $ xtrace_disable 
00:01:40.047 10:31:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.047 ************************************ 00:01:40.047 START TEST ubsan 00:01:40.047 ************************************ 00:01:40.047 10:31:56 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:40.047 using ubsan 00:01:40.047 00:01:40.047 real 0m0.000s 00:01:40.047 user 0m0.000s 00:01:40.047 sys 0m0.000s 00:01:40.047 10:31:56 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:40.047 10:31:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.047 ************************************ 00:01:40.047 END TEST ubsan 00:01:40.047 ************************************ 00:01:40.047 10:31:56 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:40.047 10:31:56 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:40.047 10:31:56 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:40.047 10:31:56 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:40.047 10:31:56 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:40.047 10:31:56 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.047 ************************************ 00:01:40.047 START TEST build_native_dpdk 00:01:40.047 ************************************ 00:01:40.047 10:31:56 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:40.047 10:31:56 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:40.047 10:31:56 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:40.047 10:31:56 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:40.047 10:31:56 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:40.047 10:31:56 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:40.047 10:31:56 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:40.047 10:31:56 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:40.047 10:31:56 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:40.047 10:31:56 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:40.047 10:31:56 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:40.047 10:31:56 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:40.047 10:31:56 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:40.047 10:31:56 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.047 10:31:56 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:40.047 10:31:56 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:40.047 10:31:56 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.047 10:31:56 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:40.047 caf0f5d395 version: 22.11.4 00:01:40.047 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.047 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.047 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.047 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.047 10:31:56 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:40.047 10:31:56 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:40.047 10:31:56 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:40.047 10:31:56 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:40.047 10:31:56 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:40.047 10:31:56 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:40.047 10:31:56 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:40.047 10:31:56 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:40.047 10:31:56 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:40.047 10:31:56 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:40.047 10:31:56 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:40.047 10:31:56 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:40.047 10:31:56 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:40.047 10:31:56 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.047 10:31:56 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:40.047 10:31:56 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:40.047 10:31:56 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:40.047 10:31:56 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:40.047 10:31:56 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:40.047 10:31:56 -- scripts/common.sh@343 -- $ case "$op" in 00:01:40.047 10:31:56 -- scripts/common.sh@344 -- $ : 1 00:01:40.047 10:31:56 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:40.047 10:31:56 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.047 10:31:56 -- scripts/common.sh@364 -- $ decimal 22 00:01:40.047 10:31:56 -- scripts/common.sh@352 -- $ local d=22 00:01:40.047 10:31:56 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:40.047 10:31:56 -- scripts/common.sh@354 -- $ echo 22 00:01:40.047 10:31:56 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:40.047 10:31:56 -- scripts/common.sh@365 -- $ decimal 21 00:01:40.047 10:31:56 -- scripts/common.sh@352 -- $ local d=21 00:01:40.047 10:31:56 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:40.047 10:31:56 -- scripts/common.sh@354 -- $ echo 21 00:01:40.047 10:31:56 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:40.047 10:31:56 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:40.047 10:31:56 -- scripts/common.sh@366 -- $ return 1 00:01:40.047 10:31:56 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:40.047 patching file config/rte_config.h 00:01:40.047 Hunk #1 succeeded at 60 (offset 1 line). 00:01:40.047 10:31:56 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:40.047 10:31:56 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:40.047 10:31:56 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:40.047 10:31:56 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:40.047 10:31:56 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.247 The Meson build system 00:01:44.247 Version: 1.3.1 00:01:44.247 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:44.247 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:44.247 Build type: native build 00:01:44.247 Program cat found: YES (/usr/bin/cat) 00:01:44.247 Project name: DPDK 00:01:44.247 Project version: 22.11.4 00:01:44.248 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.248 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:44.248 Host machine cpu family: x86_64 00:01:44.248 Host machine cpu: x86_64 00:01:44.248 Message: ## Building in Developer Mode ## 00:01:44.248 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.248 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:44.248 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.248 Program objdump found: YES (/usr/bin/objdump) 00:01:44.248 Program python3 found: YES (/usr/bin/python3) 00:01:44.248 Program cat found: YES (/usr/bin/cat) 00:01:44.248 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:44.248 Checking for size of "void *" : 8 00:01:44.248 Checking for size of "void *" : 8 (cached) 00:01:44.248 Library m found: YES 00:01:44.248 Library numa found: YES 00:01:44.248 Has header "numaif.h" : YES 00:01:44.248 Library fdt found: NO 00:01:44.248 Library execinfo found: NO 00:01:44.248 Has header "execinfo.h" : YES 00:01:44.248 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.248 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.248 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.248 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.248 Run-time dependency openssl found: YES 3.0.9 00:01:44.248 Run-time dependency libpcap found: YES 1.10.4 00:01:44.248 Has header "pcap.h" with dependency libpcap: YES 00:01:44.248 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.248 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.248 Compiler for C supports arguments -Wformat: YES 00:01:44.248 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:44.248 Compiler for C supports arguments -Wformat-security: NO 00:01:44.248 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.248 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.248 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.248 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.248 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.248 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.248 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.248 Compiler for C supports arguments -Wundef: YES 00:01:44.248 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.248 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.248 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:44.248 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.248 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.248 Compiler for C supports arguments -mavx512f: YES 00:01:44.248 Checking if "AVX512 checking" compiles: YES 00:01:44.248 Fetching value of define "__SSE4_2__" : 1 00:01:44.248 Fetching value of define "__AES__" : 1 00:01:44.248 Fetching value of define "__AVX__" : 1 00:01:44.248 Fetching value of define "__AVX2__" : (undefined) 00:01:44.248 Fetching value of define "__AVX512BW__" : (undefined) 00:01:44.248 Fetching value of define "__AVX512CD__" : (undefined) 00:01:44.248 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:44.248 Fetching value of define "__AVX512F__" : (undefined) 00:01:44.248 Fetching value of define "__AVX512VL__" : (undefined) 00:01:44.248 Fetching value of define "__PCLMUL__" : 1 00:01:44.248 Fetching value of define "__RDRND__" : 1 00:01:44.248 Fetching value of define "__RDSEED__" : (undefined) 00:01:44.248 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:44.248 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.248 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.248 Message: lib/telemetry: Defining dependency "telemetry" 00:01:44.248 Checking for function "getentropy" : YES 00:01:44.248 Message: lib/eal: Defining dependency "eal" 00:01:44.248 Message: lib/ring: Defining dependency "ring" 00:01:44.248 Message: lib/rcu: Defining dependency "rcu" 00:01:44.248 Message: lib/mempool: Defining dependency "mempool" 00:01:44.248 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.248 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.248 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.248 Compiler for C supports arguments -mpclmul: YES 00:01:44.248 Compiler for C supports arguments -maes: YES 00:01:44.248 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.248 Compiler for C supports arguments -mavx512bw: YES 00:01:44.248 Compiler for C supports arguments -mavx512dq: YES 00:01:44.248 Compiler for C supports arguments -mavx512vl: YES 00:01:44.248 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.248 Compiler for C supports arguments -mavx2: YES 00:01:44.248 Compiler for C supports arguments -mavx: YES 00:01:44.248 Message: lib/net: Defining dependency "net" 00:01:44.248 Message: lib/meter: Defining dependency "meter" 00:01:44.248 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.248 Message: lib/pci: Defining dependency "pci" 00:01:44.248 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.248 Message: lib/metrics: Defining dependency "metrics" 00:01:44.248 Message: lib/hash: Defining dependency "hash" 00:01:44.248 Message: lib/timer: Defining dependency "timer" 00:01:44.248 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:44.248 Compiler for C supports arguments -mavx2: YES (cached) 00:01:44.248 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.248 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:44.248 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:44.248 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:44.248 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:44.248 Message: lib/acl: Defining dependency "acl" 00:01:44.248 Message: lib/bbdev: Defining dependency "bbdev" 00:01:44.248 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:44.248 Run-time dependency libelf found: YES 0.190 00:01:44.248 Message: lib/bpf: Defining dependency "bpf" 00:01:44.248 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:44.248 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.248 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.248 Message: lib/distributor: Defining dependency "distributor" 00:01:44.248 Message: lib/efd: Defining dependency "efd" 00:01:44.248 Message: lib/eventdev: Defining dependency "eventdev" 00:01:44.248 Message: lib/gpudev: Defining dependency "gpudev" 00:01:44.248 Message: lib/gro: Defining dependency "gro" 00:01:44.248 Message: lib/gso: Defining dependency "gso" 00:01:44.248 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:44.248 Message: lib/jobstats: Defining dependency "jobstats" 00:01:44.248 Message: lib/latencystats: Defining dependency "latencystats" 00:01:44.248 Message: lib/lpm: Defining dependency "lpm" 00:01:44.248 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.248 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:44.248 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:44.248 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:44.248 Message: lib/member: Defining dependency "member" 00:01:44.248 Message: lib/pcapng: Defining dependency "pcapng" 00:01:44.248 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.248 Message: lib/power: Defining dependency "power" 00:01:44.248 Message: lib/rawdev: Defining dependency "rawdev" 00:01:44.248 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:44.248 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.248 Message: lib/rib: Defining dependency "rib" 00:01:44.248 Message: lib/reorder: Defining dependency "reorder" 00:01:44.248 Message: lib/sched: Defining dependency "sched" 00:01:44.248 Message: lib/security: Defining dependency "security" 00:01:44.248 Message: lib/stack: Defining dependency "stack" 00:01:44.248 Has header "linux/userfaultfd.h" : YES 00:01:44.248 Message: lib/vhost: Defining dependency "vhost" 00:01:44.249 Message: lib/ipsec: Defining dependency "ipsec" 00:01:44.249 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.249 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:44.249 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:44.249 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:44.249 Message: lib/fib: Defining dependency "fib" 00:01:44.249 Message: lib/port: Defining dependency "port" 00:01:44.249 Message: lib/pdump: Defining dependency "pdump" 00:01:44.249 Message: lib/table: Defining dependency "table" 00:01:44.249 Message: lib/pipeline: Defining dependency "pipeline" 00:01:44.249 Message: lib/graph: Defining dependency "graph" 00:01:44.249 Message: lib/node: Defining dependency "node" 00:01:44.249 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.249 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.249 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.249 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:44.249 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:44.249 Compiler for C supports arguments -Wno-unused-value: YES 00:01:45.189 Compiler for C supports arguments -Wno-format: YES 00:01:45.189 Compiler for C supports arguments -Wno-format-security: YES 00:01:45.189 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:45.189 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:45.189 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:45.189 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:45.189 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:45.189 Compiler for C supports arguments -mavx2: YES (cached) 00:01:45.189 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:45.189 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.189 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:45.189 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:45.189 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:45.189 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.189 Configuring doxy-api.conf using configuration 00:01:45.189 Program sphinx-build found: NO 00:01:45.189 Configuring rte_build_config.h using configuration 00:01:45.189 Message: 00:01:45.189 ================= 00:01:45.189 Applications Enabled 00:01:45.189 ================= 00:01:45.189 00:01:45.189 apps: 00:01:45.189 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:45.189 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:45.189 test-security-perf, 00:01:45.189 00:01:45.189 Message: 00:01:45.189 ================= 00:01:45.189 Libraries Enabled 00:01:45.189 ================= 00:01:45.189 00:01:45.189 libs: 00:01:45.189 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:45.189 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:45.189 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:45.189 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:45.189 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:45.189 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:45.189 table, pipeline, graph, node, 00:01:45.189 00:01:45.189 Message: 00:01:45.189 =============== 00:01:45.189 Drivers Enabled 00:01:45.189 =============== 00:01:45.189 00:01:45.189 common: 00:01:45.189 00:01:45.189 bus: 00:01:45.189 pci, vdev, 00:01:45.189 mempool: 00:01:45.189 ring, 00:01:45.189 dma: 00:01:45.189 00:01:45.189 net: 00:01:45.189 i40e, 00:01:45.189 raw: 00:01:45.189 00:01:45.189 crypto: 00:01:45.189 00:01:45.189 compress: 00:01:45.189 00:01:45.189 regex: 00:01:45.189 00:01:45.189 vdpa: 00:01:45.189 00:01:45.189 event: 00:01:45.189 00:01:45.189 baseband: 00:01:45.189 00:01:45.189 gpu: 00:01:45.189 00:01:45.189 00:01:45.189 Message: 00:01:45.189 ================= 00:01:45.189 Content Skipped 00:01:45.189 ================= 00:01:45.189 00:01:45.189 apps: 00:01:45.189 00:01:45.189 libs: 00:01:45.189 kni: explicitly disabled via build config (deprecated lib) 00:01:45.189 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:45.189 00:01:45.189 drivers: 00:01:45.189 common/cpt: not in enabled drivers build config 00:01:45.189 common/dpaax: not in enabled drivers build config 00:01:45.189 common/iavf: not in enabled drivers build config 00:01:45.189 common/idpf: not in enabled drivers build config 00:01:45.189 common/mvep: not in enabled drivers build config 00:01:45.189 common/octeontx: not in enabled drivers build config 00:01:45.189 bus/auxiliary: not in enabled drivers build config 00:01:45.189 bus/dpaa: not in enabled drivers build config 00:01:45.189 bus/fslmc: not in enabled drivers build config 00:01:45.189 bus/ifpga: not in enabled drivers build config 00:01:45.189 bus/vmbus: not in enabled drivers build config 00:01:45.189 common/cnxk: not in enabled drivers build config 00:01:45.189 common/mlx5: not in enabled drivers build config 00:01:45.189 common/qat: not in enabled drivers build config 00:01:45.189 common/sfc_efx: not in enabled drivers build config 00:01:45.189 mempool/bucket: not in enabled drivers build config 00:01:45.189 mempool/cnxk: not in enabled drivers build config 00:01:45.189 mempool/dpaa: not in enabled drivers build config 00:01:45.189 mempool/dpaa2: not in enabled drivers build config 00:01:45.189 mempool/octeontx: not in enabled drivers build config 00:01:45.189 mempool/stack: not in enabled drivers build config 00:01:45.189 dma/cnxk: not in enabled drivers build config 00:01:45.189 dma/dpaa: not in enabled drivers build config 00:01:45.189 dma/dpaa2: not in enabled drivers build config 00:01:45.189 dma/hisilicon: not in enabled drivers build config 00:01:45.189 dma/idxd: not in enabled drivers build config 00:01:45.189 dma/ioat: not in enabled drivers build config 00:01:45.189 dma/skeleton: not in enabled drivers build config 00:01:45.190 net/af_packet: not in enabled drivers build config 00:01:45.190 net/af_xdp: not in enabled drivers build config 00:01:45.190 net/ark: not in enabled drivers build config 00:01:45.190 net/atlantic: not in enabled drivers build config 00:01:45.190 net/avp: not in enabled drivers build config 00:01:45.190 net/axgbe: not in enabled drivers build config 00:01:45.190 net/bnx2x: not in enabled drivers build config 00:01:45.190 net/bnxt: not in 
enabled drivers build config 00:01:45.190 net/bonding: not in enabled drivers build config 00:01:45.190 net/cnxk: not in enabled drivers build config 00:01:45.190 net/cxgbe: not in enabled drivers build config 00:01:45.190 net/dpaa: not in enabled drivers build config 00:01:45.190 net/dpaa2: not in enabled drivers build config 00:01:45.190 net/e1000: not in enabled drivers build config 00:01:45.190 net/ena: not in enabled drivers build config 00:01:45.190 net/enetc: not in enabled drivers build config 00:01:45.190 net/enetfec: not in enabled drivers build config 00:01:45.190 net/enic: not in enabled drivers build config 00:01:45.190 net/failsafe: not in enabled drivers build config 00:01:45.190 net/fm10k: not in enabled drivers build config 00:01:45.190 net/gve: not in enabled drivers build config 00:01:45.190 net/hinic: not in enabled drivers build config 00:01:45.190 net/hns3: not in enabled drivers build config 00:01:45.190 net/iavf: not in enabled drivers build config 00:01:45.190 net/ice: not in enabled drivers build config 00:01:45.190 net/idpf: not in enabled drivers build config 00:01:45.190 net/igc: not in enabled drivers build config 00:01:45.190 net/ionic: not in enabled drivers build config 00:01:45.190 net/ipn3ke: not in enabled drivers build config 00:01:45.190 net/ixgbe: not in enabled drivers build config 00:01:45.190 net/kni: not in enabled drivers build config 00:01:45.190 net/liquidio: not in enabled drivers build config 00:01:45.190 net/mana: not in enabled drivers build config 00:01:45.190 net/memif: not in enabled drivers build config 00:01:45.190 net/mlx4: not in enabled drivers build config 00:01:45.190 net/mlx5: not in enabled drivers build config 00:01:45.190 net/mvneta: not in enabled drivers build config 00:01:45.190 net/mvpp2: not in enabled drivers build config 00:01:45.190 net/netvsc: not in enabled drivers build config 00:01:45.190 net/nfb: not in enabled drivers build config 00:01:45.190 net/nfp: not in enabled drivers build config 00:01:45.190 net/ngbe: not in enabled drivers build config 00:01:45.190 net/null: not in enabled drivers build config 00:01:45.190 net/octeontx: not in enabled drivers build config 00:01:45.190 net/octeon_ep: not in enabled drivers build config 00:01:45.190 net/pcap: not in enabled drivers build config 00:01:45.190 net/pfe: not in enabled drivers build config 00:01:45.190 net/qede: not in enabled drivers build config 00:01:45.190 net/ring: not in enabled drivers build config 00:01:45.190 net/sfc: not in enabled drivers build config 00:01:45.190 net/softnic: not in enabled drivers build config 00:01:45.190 net/tap: not in enabled drivers build config 00:01:45.190 net/thunderx: not in enabled drivers build config 00:01:45.190 net/txgbe: not in enabled drivers build config 00:01:45.190 net/vdev_netvsc: not in enabled drivers build config 00:01:45.190 net/vhost: not in enabled drivers build config 00:01:45.190 net/virtio: not in enabled drivers build config 00:01:45.190 net/vmxnet3: not in enabled drivers build config 00:01:45.190 raw/cnxk_bphy: not in enabled drivers build config 00:01:45.190 raw/cnxk_gpio: not in enabled drivers build config 00:01:45.190 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:45.190 raw/ifpga: not in enabled drivers build config 00:01:45.190 raw/ntb: not in enabled drivers build config 00:01:45.190 raw/skeleton: not in enabled drivers build config 00:01:45.190 crypto/armv8: not in enabled drivers build config 00:01:45.190 crypto/bcmfs: not in enabled drivers build config 00:01:45.190 
crypto/caam_jr: not in enabled drivers build config 00:01:45.190 crypto/ccp: not in enabled drivers build config 00:01:45.190 crypto/cnxk: not in enabled drivers build config 00:01:45.190 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.190 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.190 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.190 crypto/mlx5: not in enabled drivers build config 00:01:45.190 crypto/mvsam: not in enabled drivers build config 00:01:45.190 crypto/nitrox: not in enabled drivers build config 00:01:45.190 crypto/null: not in enabled drivers build config 00:01:45.190 crypto/octeontx: not in enabled drivers build config 00:01:45.190 crypto/openssl: not in enabled drivers build config 00:01:45.190 crypto/scheduler: not in enabled drivers build config 00:01:45.190 crypto/uadk: not in enabled drivers build config 00:01:45.190 crypto/virtio: not in enabled drivers build config 00:01:45.190 compress/isal: not in enabled drivers build config 00:01:45.190 compress/mlx5: not in enabled drivers build config 00:01:45.190 compress/octeontx: not in enabled drivers build config 00:01:45.190 compress/zlib: not in enabled drivers build config 00:01:45.190 regex/mlx5: not in enabled drivers build config 00:01:45.190 regex/cn9k: not in enabled drivers build config 00:01:45.190 vdpa/ifc: not in enabled drivers build config 00:01:45.190 vdpa/mlx5: not in enabled drivers build config 00:01:45.190 vdpa/sfc: not in enabled drivers build config 00:01:45.190 event/cnxk: not in enabled drivers build config 00:01:45.190 event/dlb2: not in enabled drivers build config 00:01:45.190 event/dpaa: not in enabled drivers build config 00:01:45.190 event/dpaa2: not in enabled drivers build config 00:01:45.190 event/dsw: not in enabled drivers build config 00:01:45.190 event/opdl: not in enabled drivers build config 00:01:45.190 event/skeleton: not in enabled drivers build config 00:01:45.190 event/sw: not in enabled drivers build config 00:01:45.190 event/octeontx: not in enabled drivers build config 00:01:45.190 baseband/acc: not in enabled drivers build config 00:01:45.190 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:45.190 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:45.190 baseband/la12xx: not in enabled drivers build config 00:01:45.190 baseband/null: not in enabled drivers build config 00:01:45.190 baseband/turbo_sw: not in enabled drivers build config 00:01:45.190 gpu/cuda: not in enabled drivers build config 00:01:45.190 00:01:45.190 00:01:45.190 Build targets in project: 316 00:01:45.190 00:01:45.190 DPDK 22.11.4 00:01:45.190 00:01:45.190 User defined options 00:01:45.190 libdir : lib 00:01:45.190 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:45.190 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:45.190 c_link_args : 00:01:45.190 enable_docs : false 00:01:45.190 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:45.190 enable_kmods : false 00:01:45.190 machine : native 00:01:45.190 tests : false 00:01:45.190 00:01:45.190 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.190 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:45.457 10:32:02 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:45.457 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:45.457 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:45.457 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:45.457 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:45.457 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:45.457 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.457 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.457 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.457 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.457 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.457 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.457 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.716 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.716 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.716 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.716 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.716 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.716 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.716 [18/745] Linking static target lib/librte_kvargs.a 00:01:45.716 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.716 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.716 [21/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.716 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:45.716 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.716 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.716 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.716 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.716 [27/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.716 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:45.716 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.716 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:45.716 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.716 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.716 [33/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:45.716 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.716 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:45.716 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:45.716 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:45.716 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:45.716 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.716 [40/745] Generating lib/rte_eal_def with a custom command 00:01:45.716 [41/745] Generating lib/rte_eal_mingw with a custom command 00:01:45.716 [42/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.716 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.716 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:45.716 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.716 [46/745] Generating lib/rte_ring_def with a custom command 00:01:45.716 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:45.716 [48/745] Generating lib/rte_ring_mingw with a custom command 00:01:45.716 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.716 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:45.716 [51/745] Generating lib/rte_rcu_mingw with a custom command 00:01:45.716 [52/745] Generating lib/rte_rcu_def with a custom command 00:01:45.716 [53/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:45.716 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.982 [55/745] Generating lib/rte_mempool_def with a custom command 00:01:45.982 [56/745] Generating lib/rte_mempool_mingw with a custom command 00:01:45.982 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:45.982 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.982 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.982 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.982 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.982 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:45.982 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:45.982 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:45.982 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.982 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:45.982 [67/745] Generating lib/rte_net_def with a custom command 00:01:45.982 [68/745] Generating lib/rte_net_mingw with a custom command 00:01:45.982 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.982 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.982 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.982 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:45.982 [73/745] Generating lib/rte_meter_def with a custom command 00:01:45.982 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:45.982 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:45.982 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.982 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:45.982 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:45.982 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.982 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:45.982 [81/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:45.982 [82/745] Linking static target lib/librte_ring.a 00:01:45.982 [83/745] Linking target lib/librte_kvargs.so.23.0 00:01:45.982 [84/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.241 [85/745] Generating lib/rte_pci_def with a custom command 00:01:46.241 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.241 [87/745] Linking static target lib/librte_meter.a 00:01:46.241 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.241 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.241 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:46.241 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.241 [92/745] Linking static target lib/librte_pci.a 00:01:46.241 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.241 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:46.241 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.504 [96/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.504 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.504 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.504 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.504 [100/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.504 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.504 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.504 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.504 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.504 [105/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.504 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.504 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.504 [108/745] Generating lib/rte_cmdline_def with a custom command 00:01:46.504 [109/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:46.504 [110/745] Linking static target lib/librte_telemetry.a 00:01:46.504 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:46.504 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.504 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.767 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:46.767 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:46.767 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.767 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.767 [118/745] Generating lib/rte_hash_def with a custom command 00:01:46.767 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:46.767 [120/745] Generating lib/rte_timer_def with a custom command 00:01:46.767 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.767 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:46.767 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:46.767 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.028 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:47.028 [126/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.028 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.028 [128/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.028 [129/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.028 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.028 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.028 [132/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.028 [133/745] Generating lib/rte_acl_def with a custom command 00:01:47.028 [134/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.028 [135/745] Generating lib/rte_acl_mingw with a custom command 00:01:47.028 [136/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.028 [137/745] Generating lib/rte_bbdev_def with a custom command 00:01:47.028 [138/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:47.028 [139/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.028 [140/745] Generating lib/rte_bitratestats_def with a custom command 00:01:47.028 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.028 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:47.028 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.287 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:47.287 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.287 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.287 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.287 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.287 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.287 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:47.287 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:47.287 [152/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.287 [153/745] Generating lib/rte_cfgfile_def with a custom command 00:01:47.287 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.287 [155/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:47.287 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.287 [157/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:47.287 [158/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.549 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:47.549 [160/745] Generating lib/rte_compressdev_def with a custom command 00:01:47.549 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.549 [162/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.549 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:01:47.549 
[164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:47.549 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.549 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.549 [167/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.549 [168/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.549 [169/745] Linking static target lib/librte_timer.a 00:01:47.549 [170/745] Linking static target lib/librte_cmdline.a 00:01:47.549 [171/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.549 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:47.549 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.549 [174/745] Generating lib/rte_distributor_mingw with a custom command 00:01:47.549 [175/745] Linking static target lib/librte_rcu.a 00:01:47.549 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.549 [177/745] Generating lib/rte_efd_def with a custom command 00:01:47.549 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:47.549 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.549 [180/745] Linking static target lib/librte_net.a 00:01:47.812 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.812 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.812 [183/745] Linking static target lib/librte_mempool.a 00:01:47.812 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:47.812 [185/745] Linking static target lib/librte_metrics.a 00:01:47.812 [186/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:47.812 [187/745] Linking static target lib/librte_cfgfile.a 00:01:48.077 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:48.077 [189/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.077 [190/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.077 [191/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.077 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.077 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.339 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:48.339 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:48.339 [196/745] Linking static target lib/librte_eal.a 00:01:48.339 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:48.339 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:48.339 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:48.339 [200/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:48.339 [201/745] Generating lib/rte_gpudev_def with a custom command 00:01:48.339 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:48.339 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:48.339 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:48.339 [205/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.339 [206/745] Linking static target lib/librte_bitratestats.a 00:01:48.339 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:48.339 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.339 [209/745] Generating lib/rte_gro_def with a custom command 00:01:48.339 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:48.602 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:48.602 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:48.602 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.602 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.602 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.602 [216/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:48.863 [217/745] Generating lib/rte_gso_def with a custom command 00:01:48.863 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:48.863 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:48.863 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:48.863 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:48.863 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.863 [223/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.863 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:48.863 [225/745] Linking static target lib/librte_bbdev.a 00:01:48.863 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:01:48.863 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:48.863 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.863 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.126 [230/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:49.126 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:49.126 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:49.126 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:49.126 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:49.126 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:49.126 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:49.126 [237/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:49.126 [238/745] Linking static target lib/librte_compressdev.a 00:01:49.126 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:49.126 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:49.126 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:49.387 [242/745] Linking static target lib/librte_jobstats.a 00:01:49.387 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:49.387 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.646 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:49.646 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:49.646 [247/745] Linking static target lib/librte_distributor.a 00:01:49.646 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:49.646 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:49.646 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:49.646 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:49.646 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.646 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:49.646 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:49.906 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:49.906 [256/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:49.906 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:49.906 [258/745] Linking static target lib/librte_bpf.a 00:01:49.906 [259/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.906 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:49.906 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:49.906 [262/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:49.906 [263/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:49.906 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:49.906 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.906 [266/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:49.906 [267/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:49.906 [268/745] Generating lib/rte_power_def with a custom command 00:01:49.906 [269/745] Generating lib/rte_power_mingw with a custom command 00:01:49.906 [270/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:50.169 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:50.169 [272/745] Generating lib/rte_rawdev_def with a custom command 00:01:50.169 [273/745] Linking static target lib/librte_gpudev.a 00:01:50.169 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:50.169 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:50.169 [276/745] Linking static target lib/librte_gro.a 00:01:50.169 [277/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:50.169 [278/745] Generating lib/rte_regexdev_def with a custom command 00:01:50.169 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:50.169 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:50.169 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:50.169 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:50.169 [283/745] Generating lib/rte_rib_def with a custom command 00:01:50.169 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:50.169 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:50.169 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:50.434 [287/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.434 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:50.434 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:50.434 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.434 
[291/745] Generating lib/rte_sched_def with a custom command 00:01:50.434 [292/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.434 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:50.434 [294/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:50.434 [295/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:50.434 [296/745] Generating lib/rte_sched_mingw with a custom command 00:01:50.434 [297/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:50.742 [298/745] Generating lib/rte_security_def with a custom command 00:01:50.742 [299/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:50.742 [300/745] Generating lib/rte_security_mingw with a custom command 00:01:50.742 [301/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:50.742 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:50.742 [303/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:50.742 [304/745] Linking static target lib/librte_latencystats.a 00:01:50.742 [305/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:50.742 [306/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:50.742 [307/745] Generating lib/rte_stack_def with a custom command 00:01:50.742 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:50.742 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:50.742 [310/745] Linking static target lib/librte_rawdev.a 00:01:50.742 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:50.742 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:50.742 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:50.742 [314/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:50.742 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:50.742 [316/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:50.742 [317/745] Linking static target lib/librte_stack.a 00:01:50.742 [318/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:50.742 [319/745] Generating lib/rte_vhost_def with a custom command 00:01:50.742 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:50.742 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.742 [322/745] Linking static target lib/librte_dmadev.a 00:01:50.742 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.004 [324/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:51.004 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.004 [326/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:51.004 [327/745] Linking static target lib/librte_ip_frag.a 00:01:51.004 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:51.004 [329/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:51.004 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.004 [331/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:51.264 [332/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:51.264 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:51.264 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.264 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.524 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.524 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:51.524 [338/745] Generating lib/rte_fib_def with a custom command 00:01:51.524 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:51.524 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:51.524 [341/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:51.524 [342/745] Linking static target lib/librte_regexdev.a 00:01:51.524 [343/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:51.524 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:51.524 [345/745] Linking static target lib/librte_gso.a 00:01:51.787 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:51.787 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:51.787 [349/745] Linking static target lib/librte_efd.a 00:01:51.787 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.054 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:52.054 [352/745] Linking static target lib/librte_pcapng.a 00:01:52.054 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:52.054 [354/745] Linking static target lib/librte_lpm.a 00:01:52.054 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:52.054 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.054 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:52.054 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.054 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.054 [360/745] Linking static target lib/librte_reorder.a 00:01:52.054 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.313 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.313 [363/745] Generating lib/rte_port_def with a custom command 00:01:52.313 [364/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:52.313 [365/745] Generating lib/rte_port_mingw with a custom command 00:01:52.313 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.313 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:52.313 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:52.313 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:52.313 [370/745] Generating lib/rte_pdump_def with a custom command 00:01:52.313 [371/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:52.313 [372/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.313 [373/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:52.313 [374/745] Generating 
lib/rte_pdump_mingw with a custom command 00:01:52.313 [375/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:52.581 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:52.581 [377/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:52.581 [378/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.581 [379/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.581 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.581 [381/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.581 [382/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:52.581 [383/745] Linking static target lib/librte_security.a 00:01:52.581 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.581 [385/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.581 [386/745] Linking static target lib/librte_hash.a 00:01:52.581 [387/745] Linking static target lib/librte_power.a 00:01:52.581 [388/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.842 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:52.842 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:52.842 [391/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:52.842 [392/745] Linking static target lib/librte_rib.a 00:01:52.842 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:52.842 [394/745] Linking static target lib/librte_acl.a 00:01:52.842 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:52.842 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:53.104 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:53.104 [398/745] Generating lib/rte_table_def with a custom command 00:01:53.104 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:53.104 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.364 [401/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.627 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.627 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:53.627 [404/745] Linking static target lib/librte_ethdev.a 00:01:53.627 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.627 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:53.627 [407/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:53.627 [408/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.627 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:53.627 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:53.627 [411/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.627 [412/745] Generating lib/rte_pipeline_def with a custom command 00:01:53.627 [413/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:53.627 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:53.891 [415/745] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:53.891 [416/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:53.891 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:53.891 [418/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.891 [419/745] Linking static target lib/librte_mbuf.a 00:01:53.891 [420/745] Generating lib/rte_graph_def with a custom command 00:01:53.891 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:53.891 [422/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:53.891 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:53.891 [424/745] Linking static target lib/librte_fib.a 00:01:54.153 [425/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:54.153 [426/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:54.153 [427/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.153 [428/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:54.153 [429/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:54.153 [430/745] Linking static target lib/librte_eventdev.a 00:01:54.153 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:54.153 [432/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:54.153 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:54.153 [434/745] Generating lib/rte_node_def with a custom command 00:01:54.153 [435/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:54.153 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:54.153 [437/745] Linking static target lib/librte_member.a 00:01:54.153 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:54.153 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:54.414 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.414 [441/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.414 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:54.414 [443/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.414 [444/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:54.414 [445/745] Linking static target lib/librte_sched.a 00:01:54.674 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:54.674 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:54.674 [448/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.674 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:54.674 [450/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:54.674 [451/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.674 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.674 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:54.674 [454/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:54.674 [455/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:54.674 [456/745] Generating lib/member.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:54.674 [457/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:54.674 [458/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:54.937 [459/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:54.937 [460/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.937 [461/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.937 [462/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:54.937 [463/745] Linking static target lib/librte_pdump.a 00:01:54.937 [464/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:54.937 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:54.937 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:54.937 [467/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.937 [468/745] Linking static target lib/librte_cryptodev.a 00:01:54.937 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:54.937 [470/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:54.937 [471/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:55.198 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:55.199 [473/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.199 [474/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.199 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.199 [476/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:55.199 [477/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.199 [478/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:55.199 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:55.199 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:55.199 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:55.199 [482/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.459 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:55.459 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:55.459 [485/745] Linking static target lib/librte_table.a 00:01:55.459 [486/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.459 [487/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.459 [488/745] Linking static target drivers/librte_bus_vdev.a 00:01:55.459 [489/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:55.731 [490/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.731 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:55.732 [492/745] Linking static target lib/librte_ipsec.a 00:01:55.732 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.732 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.989 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.989 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:55.989 [497/745] 
Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:55.989 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:55.989 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:55.989 [500/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:55.989 [501/745] Linking static target lib/librte_graph.a 00:01:55.989 [502/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:56.251 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:56.251 [504/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.251 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:56.251 [506/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.251 [507/745] Linking static target drivers/librte_bus_pci.a 00:01:56.251 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.251 [509/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.251 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:56.251 [511/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:56.512 [512/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.512 [513/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:56.512 [514/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:56.769 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:56.769 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.038 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:57.038 [518/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:57.038 [519/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.038 [520/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:57.038 [521/745] Linking static target lib/librte_port.a 00:01:57.038 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:57.297 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:57.297 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:57.297 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.297 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.558 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.558 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:57.558 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:57.558 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.558 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.558 [532/745] Linking static target drivers/librte_mempool_ring.a 00:01:57.558 [533/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:57.823 [534/745] Compiling C object 
drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.823 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:57.823 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:57.823 [537/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:57.823 [538/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:58.084 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.084 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:58.349 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.349 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:58.349 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:58.349 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:58.349 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:58.611 [546/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:58.611 [547/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:58.612 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:58.612 [549/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:58.612 [550/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:58.875 [551/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:59.134 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:59.134 [553/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:59.134 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:59.134 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:59.398 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:59.398 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:59.398 [558/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:59.663 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:59.663 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:59.925 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:59.925 [562/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:59.925 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:59.925 [564/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:59.925 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:59.925 [566/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:00.184 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:00.184 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:00.184 [569/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:00.184 [570/745] Generating lib/eal.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:00.184 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:00.185 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:00.185 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:00.185 [574/745] Linking target lib/librte_eal.so.23.0 00:02:00.449 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:00.449 [576/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:00.449 [577/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:00.713 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:00.713 [579/745] Linking target lib/librte_ring.so.23.0 00:02:00.713 [580/745] Linking target lib/librte_meter.so.23.0 00:02:00.713 [581/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:00.713 [582/745] Linking target lib/librte_pci.so.23.0 00:02:00.713 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:00.713 [584/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:00.713 [585/745] Linking target lib/librte_timer.so.23.0 00:02:00.713 [586/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:00.713 [587/745] Linking target lib/librte_cfgfile.so.23.0 00:02:00.713 [588/745] Linking target lib/librte_acl.so.23.0 00:02:00.713 [589/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:00.713 [590/745] Linking target lib/librte_jobstats.so.23.0 00:02:00.713 [591/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:00.713 [592/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:00.975 [593/745] Linking target lib/librte_rawdev.so.23.0 00:02:00.975 [594/745] Linking target lib/librte_dmadev.so.23.0 00:02:00.975 [595/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:00.975 [596/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:00.975 [597/745] Linking target lib/librte_stack.so.23.0 00:02:00.975 [598/745] Linking target lib/librte_graph.so.23.0 00:02:00.975 [599/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.975 [600/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:00.975 [601/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:00.975 [602/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:00.975 [603/745] Linking target lib/librte_rcu.so.23.0 00:02:00.975 [604/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:00.975 [605/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:00.975 [606/745] Linking target lib/librte_mempool.so.23.0 00:02:00.975 [607/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:00.975 [608/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:00.975 [609/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:01.236 [610/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:01.236 [611/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 
00:02:01.236 [612/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:01.237 [613/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:01.237 [614/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:01.237 [615/745] Linking target lib/librte_mbuf.so.23.0 00:02:01.237 [616/745] Linking target lib/librte_rib.so.23.0 00:02:01.237 [617/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:01.237 [618/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:01.500 [619/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:01.500 [620/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:01.500 [621/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:01.500 [622/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:01.500 [623/745] Linking target lib/librte_net.so.23.0 00:02:01.500 [624/745] Linking target lib/librte_bbdev.so.23.0 00:02:01.500 [625/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:01.500 [626/745] Linking target lib/librte_compressdev.so.23.0 00:02:01.761 [627/745] Linking target lib/librte_cryptodev.so.23.0 00:02:01.761 [628/745] Linking target lib/librte_distributor.so.23.0 00:02:01.761 [629/745] Linking target lib/librte_gpudev.so.23.0 00:02:01.761 [630/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:01.761 [631/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:01.761 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:01.761 [633/745] Linking target lib/librte_reorder.so.23.0 00:02:01.761 [634/745] Linking target lib/librte_regexdev.so.23.0 00:02:01.761 [635/745] Linking target lib/librte_sched.so.23.0 00:02:01.761 [636/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:01.761 [637/745] Linking target lib/librte_fib.so.23.0 00:02:01.761 [638/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:01.761 [639/745] Linking target lib/librte_hash.so.23.0 00:02:01.761 [640/745] Linking target lib/librte_cmdline.so.23.0 00:02:01.761 [641/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:01.761 [642/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:02.023 [643/745] Linking target lib/librte_ethdev.so.23.0 00:02:02.023 [644/745] Linking target lib/librte_security.so.23.0 00:02:02.023 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:02.023 [646/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:02.023 [647/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:02.023 [648/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:02.023 [649/745] Linking target lib/librte_efd.so.23.0 00:02:02.023 [650/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:02.023 [651/745] Linking target lib/librte_lpm.so.23.0 00:02:02.023 [652/745] Linking target lib/librte_member.so.23.0 00:02:02.023 [653/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:02.281 [654/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 
00:02:02.281 [655/745] Linking target lib/librte_pcapng.so.23.0 00:02:02.281 [656/745] Linking target lib/librte_gso.so.23.0 00:02:02.281 [657/745] Linking target lib/librte_eventdev.so.23.0 00:02:02.281 [658/745] Linking target lib/librte_bpf.so.23.0 00:02:02.281 [659/745] Linking target lib/librte_metrics.so.23.0 00:02:02.281 [660/745] Linking target lib/librte_ip_frag.so.23.0 00:02:02.281 [661/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:02.281 [662/745] Linking target lib/librte_gro.so.23.0 00:02:02.282 [663/745] Linking target lib/librte_power.so.23.0 00:02:02.282 [664/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:02.282 [665/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:02.282 [666/745] Linking target lib/librte_ipsec.so.23.0 00:02:02.282 [667/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:02.282 [668/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:02.282 [669/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:02.282 [670/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:02.282 [671/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:02.282 [672/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:02.282 [673/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:02.282 [674/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:02.282 [675/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:02.282 [676/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:02.282 [677/745] Linking target lib/librte_latencystats.so.23.0 00:02:02.540 [678/745] Linking target lib/librte_bitratestats.so.23.0 00:02:02.540 [679/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:02.540 [680/745] Linking target lib/librte_pdump.so.23.0 00:02:02.540 [681/745] Linking target lib/librte_port.so.23.0 00:02:02.540 [682/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:02.540 [683/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:02.540 [684/745] Linking target lib/librte_table.so.23.0 00:02:02.798 [685/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:02.798 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:02.798 [687/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:02.798 [688/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:02.798 [689/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:02.798 [690/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:03.057 [691/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:03.057 [692/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:03.057 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:03.057 [694/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:03.314 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:03.314 [696/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 
00:02:03.314 [697/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:03.572 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:03.829 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:03.829 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:04.086 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:04.086 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:04.086 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:04.651 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:04.651 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:04.651 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:04.651 [707/745] Linking static target drivers/librte_net_i40e.a 00:02:04.651 [708/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:04.909 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:04.909 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:05.166 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.166 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:05.732 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:05.732 [714/745] Linking static target lib/librte_node.a 00:02:05.990 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.990 [716/745] Linking target lib/librte_node.so.23.0 00:02:06.248 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:06.813 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:07.379 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:15.488 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.592 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.592 [722/745] Linking static target lib/librte_vhost.a 00:02:47.592 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.592 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:57.597 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:57.597 [726/745] Linking static target lib/librte_pipeline.a 00:02:57.597 [727/745] Linking target app/dpdk-dumpcap 00:02:57.597 [728/745] Linking target app/dpdk-test-regex 00:02:57.598 [729/745] Linking target app/dpdk-test-security-perf 00:02:57.598 [730/745] Linking target app/dpdk-test-crypto-perf 00:02:57.598 [731/745] Linking target app/dpdk-test-pipeline 00:02:57.598 [732/745] Linking target app/dpdk-pdump 00:02:57.598 [733/745] Linking target app/dpdk-test-acl 00:02:57.598 [734/745] Linking target app/dpdk-test-sad 00:02:57.598 [735/745] Linking target app/dpdk-test-bbdev 00:02:57.598 [736/745] Linking target app/dpdk-test-compress-perf 00:02:57.598 [737/745] Linking target app/dpdk-proc-info 00:02:57.598 [738/745] Linking target app/dpdk-test-cmdline 00:02:57.598 [739/745] Linking target app/dpdk-test-fib 00:02:57.598 [740/745] Linking target app/dpdk-test-flow-perf 00:02:57.598 [741/745] Linking target 
app/dpdk-test-gpudev 00:02:57.598 [742/745] Linking target app/dpdk-test-eventdev 00:02:57.598 [743/745] Linking target app/dpdk-testpmd 00:02:58.971 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.971 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:58.971 10:33:15 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:58.971 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:58.971 [0/1] Installing files. 00:02:59.233 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:59.236 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:59.236 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.237 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.238 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:59.239 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:59.239 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.239 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.497 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.497 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.497 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.497 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.497 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.497 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.498 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:59.756 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:59.756 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:59.756 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.756 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:59.756 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.756 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.756 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.757 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.758 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:00.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:00.019 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:00.019 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:00.019 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:00.019 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:00.019 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:00.019 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:00.019 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:00.019 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:00.019 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:00.019 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:00.019 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:00.019 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:00.019 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:00.019 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:00.019 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:00.019 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:00.019 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:00.019 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:00.019 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:00.019 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:00.019 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:00.019 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:00.019 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:00.019 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:00.019 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:00.019 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:00.019 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:00.019 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:00.019 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:00.019 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:00.019 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:00.019 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:00.019 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:00.019 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:00.019 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:00.019 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:00.019 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:00.019 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:00.019 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:00.019 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:00.019 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:00.019 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:00.019 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:00.019 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:00.019 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:00.019 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:00.019 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:00.019 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:00.019 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:00.019 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:00.019 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:00.019 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:00.020 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:00.020 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:00.020 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:00.020 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:00.020 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:00.020 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:00.020 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:00.020 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:00.020 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:00.020 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:00.020 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:00.020 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:00.020 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:00.020 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:00.020 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:00.020 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:00.020 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:00.020 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:00.020 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:00.020 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:00.020 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:00.020 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:00.020 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:00.020 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:00.020 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:00.020 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:00.020 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:00.020 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:00.020 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:00.020 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:00.020 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:00.020 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:00.020 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:00.020 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:00.020 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:00.020 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:00.020 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:00.020 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:00.020 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:00.020 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:00.020 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:00.020 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:00.020 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:00.020 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:00.020 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:00.020 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:00.020 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:00.020 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:00.020 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:00.020 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:00.020 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:00.020 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:00.020 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:00.020 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:00.020 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:00.020 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:00.020 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:00.020 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:00.020 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:00.020 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:00.020 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:00.020 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:00.020 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:00.020 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:00.020 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:00.020 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:00.020 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:00.020 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:00.020 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:00.020 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:00.020 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:00.020 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:00.020 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:00.020 10:33:16 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:00.020 10:33:16 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.020 10:33:16 -- common/autobuild_common.sh@200 -- $ cat 00:03:00.020 10:33:16 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:00.020 00:03:00.020 real 1m19.818s 00:03:00.020 user 14m20.080s 00:03:00.020 sys 1m48.351s 00:03:00.020 10:33:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:00.020 10:33:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.020 ************************************ 00:03:00.020 END TEST build_native_dpdk 00:03:00.020 ************************************ 00:03:00.020 10:33:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:00.020 10:33:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:00.020 10:33:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:00.020 10:33:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:00.020 10:33:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:00.020 10:33:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:00.020 10:33:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:00.020 10:33:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:00.020 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
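For reference, the staged DPDK prefix named in the configure line above is consumed through the libdpdk.pc file that was just installed into build/lib/pkgconfig. A minimal sketch of querying that prefix by hand, assuming the same path as in this log (these commands are not part of the recorded output):

  export DPDK_PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  export PKG_CONFIG_PATH="$DPDK_PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
  pkg-config --modversion libdpdk        # should report the 22.11.x build staged above
  pkg-config --cflags --libs libdpdk     # flags a consumer such as SPDK's configure picks up
  export LD_LIBRARY_PATH="$DPDK_PREFIX/lib:$LD_LIBRARY_PATH"   # run-time lookup for the un-installed .so files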
00:03:00.279 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:00.279 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:00.279 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:00.537 Using 'verbs' RDMA provider 00:03:10.768 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:20.737 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:20.737 Creating mk/config.mk...done. 00:03:20.737 Creating mk/cc.flags.mk...done. 00:03:20.737 Type 'make' to build. 00:03:20.737 10:33:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:20.737 10:33:36 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:20.737 10:33:36 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:20.737 10:33:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.737 ************************************ 00:03:20.737 START TEST make 00:03:20.737 ************************************ 00:03:20.737 10:33:36 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:20.737 make[1]: Nothing to be done for 'all'. 00:03:21.000 The Meson build system 00:03:21.000 Version: 1.3.1 00:03:21.000 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:21.000 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:21.000 Build type: native build 00:03:21.000 Project name: libvfio-user 00:03:21.000 Project version: 0.0.1 00:03:21.000 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:21.000 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:21.000 Host machine cpu family: x86_64 00:03:21.000 Host machine cpu: x86_64 00:03:21.000 Run-time dependency threads found: YES 00:03:21.000 Library dl found: YES 00:03:21.000 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:21.000 Run-time dependency json-c found: YES 0.17 00:03:21.000 Run-time dependency cmocka found: YES 1.1.7 00:03:21.000 Program pytest-3 found: NO 00:03:21.000 Program flake8 found: NO 00:03:21.000 Program misspell-fixer found: NO 00:03:21.000 Program restructuredtext-lint found: NO 00:03:21.000 Program valgrind found: YES (/usr/bin/valgrind) 00:03:21.000 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:21.000 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:21.000 Compiler for C supports arguments -Wwrite-strings: YES 00:03:21.000 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:21.000 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:21.000 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:21.000 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
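The libvfio-user configuration printed above uses stock Meson options, so it could be reproduced from the libvfio-user source tree roughly as follows; this is a sketch based only on the user-defined options the log reports (buildtype, default_library, libdir), with an illustrative build-directory name:

  meson setup build-debug \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Dlibdir=/usr/local/lib

The numbered compile and link steps that follow are what a subsequent ninja invocation against that build directory drives.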
00:03:21.000 Build targets in project: 8 00:03:21.000 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:21.000 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:21.000 00:03:21.000 libvfio-user 0.0.1 00:03:21.000 00:03:21.000 User defined options 00:03:21.000 buildtype : debug 00:03:21.000 default_library: shared 00:03:21.000 libdir : /usr/local/lib 00:03:21.000 00:03:21.000 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.580 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:21.848 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:21.848 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:21.848 [3/37] Compiling C object samples/null.p/null.c.o 00:03:21.848 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:21.848 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:22.107 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:22.107 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:22.107 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:22.107 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:22.107 [10/37] Compiling C object samples/server.p/server.c.o 00:03:22.107 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:22.107 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:22.107 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:22.107 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:22.107 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:22.107 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:22.107 [17/37] Compiling C object samples/client.p/client.c.o 00:03:22.107 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:22.107 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:22.107 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:22.107 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:22.107 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:22.107 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:22.107 [24/37] Linking target samples/client 00:03:22.107 [25/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:22.107 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:22.107 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:22.107 [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:22.107 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:22.371 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:22.371 [31/37] Linking target test/unit_tests 00:03:22.371 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:22.371 [33/37] Linking target samples/server 00:03:22.371 [34/37] Linking target samples/gpio-pci-idio-16 00:03:22.371 [35/37] Linking target samples/null 00:03:22.371 [36/37] Linking target samples/lspci 00:03:22.371 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:22.371 INFO: autodetecting backend as ninja 00:03:22.371 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
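The install on the next line uses Meson's standard DESTDIR staging so the libvfio-user artifacts land inside the SPDK build tree rather than the system /usr/local. The generic pattern, with an illustrative staging path that is not from this log, looks like:

  ninja -C build-debug                                    # the [1/37]..[37/37] steps shown above
  DESTDIR=/tmp/vfio-user-stage meson install --quiet -C build-debug
  find /tmp/vfio-user-stage -name 'libvfio-user.so*'      # staged library plus its version symlinks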
00:03:22.638 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:23.211 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:23.211 ninja: no work to do. 00:03:35.408 CC lib/ut/ut.o 00:03:35.408 CC lib/log/log.o 00:03:35.408 CC lib/log/log_flags.o 00:03:35.408 CC lib/log/log_deprecated.o 00:03:35.408 CC lib/ut_mock/mock.o 00:03:35.408 LIB libspdk_ut_mock.a 00:03:35.408 LIB libspdk_ut.a 00:03:35.408 LIB libspdk_log.a 00:03:35.408 SO libspdk_ut_mock.so.5.0 00:03:35.408 SO libspdk_ut.so.1.0 00:03:35.408 SO libspdk_log.so.6.1 00:03:35.408 SYMLINK libspdk_ut_mock.so 00:03:35.408 SYMLINK libspdk_ut.so 00:03:35.408 SYMLINK libspdk_log.so 00:03:35.408 CC lib/dma/dma.o 00:03:35.408 CXX lib/trace_parser/trace.o 00:03:35.408 CC lib/ioat/ioat.o 00:03:35.408 CC lib/util/base64.o 00:03:35.408 CC lib/util/bit_array.o 00:03:35.408 CC lib/util/cpuset.o 00:03:35.408 CC lib/util/crc16.o 00:03:35.408 CC lib/util/crc32.o 00:03:35.408 CC lib/util/crc32c.o 00:03:35.408 CC lib/util/crc32_ieee.o 00:03:35.408 CC lib/util/crc64.o 00:03:35.408 CC lib/util/dif.o 00:03:35.408 CC lib/util/fd.o 00:03:35.408 CC lib/util/file.o 00:03:35.408 CC lib/util/hexlify.o 00:03:35.408 CC lib/util/iov.o 00:03:35.408 CC lib/util/math.o 00:03:35.408 CC lib/util/pipe.o 00:03:35.408 CC lib/util/strerror_tls.o 00:03:35.409 CC lib/util/string.o 00:03:35.409 CC lib/util/uuid.o 00:03:35.409 CC lib/util/fd_group.o 00:03:35.409 CC lib/util/zipf.o 00:03:35.409 CC lib/util/xor.o 00:03:35.409 CC lib/vfio_user/host/vfio_user_pci.o 00:03:35.409 CC lib/vfio_user/host/vfio_user.o 00:03:35.409 LIB libspdk_dma.a 00:03:35.409 SO libspdk_dma.so.3.0 00:03:35.409 SYMLINK libspdk_dma.so 00:03:35.409 LIB libspdk_ioat.a 00:03:35.409 SO libspdk_ioat.so.6.0 00:03:35.409 LIB libspdk_vfio_user.a 00:03:35.409 SO libspdk_vfio_user.so.4.0 00:03:35.409 SYMLINK libspdk_ioat.so 00:03:35.409 SYMLINK libspdk_vfio_user.so 00:03:35.409 LIB libspdk_util.a 00:03:35.409 SO libspdk_util.so.8.0 00:03:35.667 SYMLINK libspdk_util.so 00:03:35.667 CC lib/rdma/common.o 00:03:35.667 CC lib/conf/conf.o 00:03:35.667 CC lib/vmd/vmd.o 00:03:35.667 CC lib/json/json_parse.o 00:03:35.667 CC lib/idxd/idxd.o 00:03:35.667 CC lib/env_dpdk/env.o 00:03:35.667 CC lib/vmd/led.o 00:03:35.667 CC lib/rdma/rdma_verbs.o 00:03:35.667 CC lib/env_dpdk/memory.o 00:03:35.667 CC lib/idxd/idxd_user.o 00:03:35.667 CC lib/json/json_util.o 00:03:35.667 CC lib/env_dpdk/pci.o 00:03:35.667 CC lib/idxd/idxd_kernel.o 00:03:35.667 CC lib/json/json_write.o 00:03:35.667 CC lib/env_dpdk/init.o 00:03:35.667 CC lib/env_dpdk/threads.o 00:03:35.667 CC lib/env_dpdk/pci_ioat.o 00:03:35.667 CC lib/env_dpdk/pci_virtio.o 00:03:35.667 CC lib/env_dpdk/pci_vmd.o 00:03:35.667 CC lib/env_dpdk/pci_idxd.o 00:03:35.667 CC lib/env_dpdk/pci_event.o 00:03:35.667 CC lib/env_dpdk/sigbus_handler.o 00:03:35.667 CC lib/env_dpdk/pci_dpdk.o 00:03:35.667 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.667 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.925 LIB libspdk_trace_parser.a 00:03:35.925 SO libspdk_trace_parser.so.4.0 00:03:35.925 SYMLINK libspdk_trace_parser.so 00:03:35.925 LIB libspdk_conf.a 00:03:36.182 SO libspdk_conf.so.5.0 00:03:36.182 LIB libspdk_rdma.a 00:03:36.182 SYMLINK libspdk_conf.so 00:03:36.182 LIB libspdk_json.a 00:03:36.182 SO libspdk_rdma.so.5.0 00:03:36.182 SO libspdk_json.so.5.1 00:03:36.182 SYMLINK libspdk_rdma.so 00:03:36.182 SYMLINK 
libspdk_json.so 00:03:36.440 CC lib/jsonrpc/jsonrpc_server.o 00:03:36.440 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:36.440 CC lib/jsonrpc/jsonrpc_client.o 00:03:36.440 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:36.440 LIB libspdk_idxd.a 00:03:36.440 SO libspdk_idxd.so.11.0 00:03:36.440 SYMLINK libspdk_idxd.so 00:03:36.440 LIB libspdk_vmd.a 00:03:36.440 SO libspdk_vmd.so.5.0 00:03:36.698 LIB libspdk_jsonrpc.a 00:03:36.698 SYMLINK libspdk_vmd.so 00:03:36.698 SO libspdk_jsonrpc.so.5.1 00:03:36.698 SYMLINK libspdk_jsonrpc.so 00:03:36.698 CC lib/rpc/rpc.o 00:03:36.955 LIB libspdk_rpc.a 00:03:36.955 SO libspdk_rpc.so.5.0 00:03:36.955 SYMLINK libspdk_rpc.so 00:03:37.214 CC lib/trace/trace.o 00:03:37.214 CC lib/sock/sock.o 00:03:37.214 CC lib/trace/trace_flags.o 00:03:37.214 CC lib/notify/notify.o 00:03:37.214 CC lib/sock/sock_rpc.o 00:03:37.214 CC lib/trace/trace_rpc.o 00:03:37.214 CC lib/notify/notify_rpc.o 00:03:37.214 LIB libspdk_notify.a 00:03:37.214 SO libspdk_notify.so.5.0 00:03:37.472 LIB libspdk_trace.a 00:03:37.472 SYMLINK libspdk_notify.so 00:03:37.472 SO libspdk_trace.so.9.0 00:03:37.472 SYMLINK libspdk_trace.so 00:03:37.472 LIB libspdk_sock.a 00:03:37.472 CC lib/thread/thread.o 00:03:37.472 CC lib/thread/iobuf.o 00:03:37.472 SO libspdk_sock.so.8.0 00:03:37.731 SYMLINK libspdk_sock.so 00:03:37.731 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.731 CC lib/nvme/nvme_ctrlr.o 00:03:37.731 CC lib/nvme/nvme_fabric.o 00:03:37.731 CC lib/nvme/nvme_ns_cmd.o 00:03:37.731 CC lib/nvme/nvme_ns.o 00:03:37.731 CC lib/nvme/nvme_pcie_common.o 00:03:37.731 CC lib/nvme/nvme_pcie.o 00:03:37.731 CC lib/nvme/nvme_qpair.o 00:03:37.731 CC lib/nvme/nvme.o 00:03:37.731 CC lib/nvme/nvme_quirks.o 00:03:37.731 CC lib/nvme/nvme_transport.o 00:03:37.731 CC lib/nvme/nvme_discovery.o 00:03:37.731 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.731 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.731 CC lib/nvme/nvme_tcp.o 00:03:37.731 CC lib/nvme/nvme_opal.o 00:03:37.731 CC lib/nvme/nvme_io_msg.o 00:03:37.731 CC lib/nvme/nvme_poll_group.o 00:03:37.731 CC lib/nvme/nvme_zns.o 00:03:37.731 CC lib/nvme/nvme_cuse.o 00:03:37.731 CC lib/nvme/nvme_vfio_user.o 00:03:37.731 CC lib/nvme/nvme_rdma.o 00:03:37.731 LIB libspdk_env_dpdk.a 00:03:37.989 SO libspdk_env_dpdk.so.13.0 00:03:37.989 SYMLINK libspdk_env_dpdk.so 00:03:39.366 LIB libspdk_thread.a 00:03:39.366 SO libspdk_thread.so.9.0 00:03:39.366 SYMLINK libspdk_thread.so 00:03:39.366 CC lib/blob/blobstore.o 00:03:39.366 CC lib/vfu_tgt/tgt_endpoint.o 00:03:39.366 CC lib/init/json_config.o 00:03:39.366 CC lib/accel/accel.o 00:03:39.366 CC lib/virtio/virtio.o 00:03:39.366 CC lib/init/subsystem.o 00:03:39.366 CC lib/vfu_tgt/tgt_rpc.o 00:03:39.366 CC lib/blob/request.o 00:03:39.366 CC lib/accel/accel_rpc.o 00:03:39.366 CC lib/virtio/virtio_vhost_user.o 00:03:39.366 CC lib/init/subsystem_rpc.o 00:03:39.366 CC lib/blob/zeroes.o 00:03:39.366 CC lib/accel/accel_sw.o 00:03:39.366 CC lib/virtio/virtio_vfio_user.o 00:03:39.366 CC lib/init/rpc.o 00:03:39.366 CC lib/blob/blob_bs_dev.o 00:03:39.366 CC lib/virtio/virtio_pci.o 00:03:39.624 LIB libspdk_init.a 00:03:39.624 SO libspdk_init.so.4.0 00:03:39.624 LIB libspdk_vfu_tgt.a 00:03:39.624 SYMLINK libspdk_init.so 00:03:39.624 LIB libspdk_virtio.a 00:03:39.624 SO libspdk_vfu_tgt.so.2.0 00:03:39.624 SO libspdk_virtio.so.6.0 00:03:39.882 SYMLINK libspdk_vfu_tgt.so 00:03:39.882 SYMLINK libspdk_virtio.so 00:03:39.882 CC lib/event/app.o 00:03:39.882 CC lib/event/reactor.o 00:03:39.882 CC lib/event/log_rpc.o 00:03:39.882 CC lib/event/app_rpc.o 00:03:39.882 CC 
lib/event/scheduler_static.o 00:03:40.139 LIB libspdk_nvme.a 00:03:40.139 SO libspdk_nvme.so.12.0 00:03:40.139 LIB libspdk_event.a 00:03:40.139 SO libspdk_event.so.12.0 00:03:40.398 SYMLINK libspdk_event.so 00:03:40.398 LIB libspdk_accel.a 00:03:40.398 SO libspdk_accel.so.14.0 00:03:40.398 SYMLINK libspdk_nvme.so 00:03:40.398 SYMLINK libspdk_accel.so 00:03:40.656 CC lib/bdev/bdev.o 00:03:40.656 CC lib/bdev/bdev_rpc.o 00:03:40.656 CC lib/bdev/bdev_zone.o 00:03:40.656 CC lib/bdev/part.o 00:03:40.656 CC lib/bdev/scsi_nvme.o 00:03:42.034 LIB libspdk_blob.a 00:03:42.034 SO libspdk_blob.so.10.1 00:03:42.034 SYMLINK libspdk_blob.so 00:03:42.292 CC lib/blobfs/blobfs.o 00:03:42.292 CC lib/blobfs/tree.o 00:03:42.292 CC lib/lvol/lvol.o 00:03:43.240 LIB libspdk_bdev.a 00:03:43.240 LIB libspdk_blobfs.a 00:03:43.240 SO libspdk_blobfs.so.9.0 00:03:43.240 SO libspdk_bdev.so.14.0 00:03:43.240 LIB libspdk_lvol.a 00:03:43.240 SYMLINK libspdk_blobfs.so 00:03:43.240 SO libspdk_lvol.so.9.1 00:03:43.240 SYMLINK libspdk_bdev.so 00:03:43.240 SYMLINK libspdk_lvol.so 00:03:43.240 CC lib/nbd/nbd.o 00:03:43.240 CC lib/nbd/nbd_rpc.o 00:03:43.240 CC lib/ublk/ublk.o 00:03:43.240 CC lib/scsi/lun.o 00:03:43.240 CC lib/scsi/dev.o 00:03:43.240 CC lib/ublk/ublk_rpc.o 00:03:43.240 CC lib/scsi/port.o 00:03:43.240 CC lib/scsi/scsi.o 00:03:43.240 CC lib/ftl/ftl_core.o 00:03:43.240 CC lib/ftl/ftl_init.o 00:03:43.240 CC lib/nvmf/ctrlr_discovery.o 00:03:43.240 CC lib/scsi/scsi_bdev.o 00:03:43.240 CC lib/nvmf/ctrlr.o 00:03:43.240 CC lib/ftl/ftl_layout.o 00:03:43.240 CC lib/nvmf/ctrlr_bdev.o 00:03:43.240 CC lib/ftl/ftl_debug.o 00:03:43.240 CC lib/nvmf/subsystem.o 00:03:43.240 CC lib/ftl/ftl_io.o 00:03:43.240 CC lib/nvmf/nvmf.o 00:03:43.240 CC lib/ftl/ftl_sb.o 00:03:43.240 CC lib/ftl/ftl_l2p.o 00:03:43.240 CC lib/scsi/scsi_pr.o 00:03:43.240 CC lib/scsi/scsi_rpc.o 00:03:43.240 CC lib/nvmf/transport.o 00:03:43.240 CC lib/nvmf/nvmf_rpc.o 00:03:43.240 CC lib/ftl/ftl_l2p_flat.o 00:03:43.240 CC lib/scsi/task.o 00:03:43.240 CC lib/nvmf/tcp.o 00:03:43.240 CC lib/nvmf/vfio_user.o 00:03:43.240 CC lib/ftl/ftl_band.o 00:03:43.240 CC lib/ftl/ftl_nv_cache.o 00:03:43.240 CC lib/nvmf/rdma.o 00:03:43.240 CC lib/ftl/ftl_band_ops.o 00:03:43.240 CC lib/ftl/ftl_writer.o 00:03:43.240 CC lib/ftl/ftl_rq.o 00:03:43.240 CC lib/ftl/ftl_reloc.o 00:03:43.240 CC lib/ftl/ftl_l2p_cache.o 00:03:43.240 CC lib/ftl/ftl_p2l.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:43.240 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.505 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.505 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.505 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.768 CC lib/ftl/utils/ftl_conf.o 00:03:43.768 CC lib/ftl/utils/ftl_md.o 00:03:43.768 CC lib/ftl/utils/ftl_mempool.o 00:03:43.768 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.768 CC lib/ftl/utils/ftl_property.o 00:03:43.768 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.768 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.768 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.768 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.768 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:43.768 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.768 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:03:43.768 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:43.768 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:43.768 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:43.768 CC lib/ftl/base/ftl_base_dev.o 00:03:43.768 CC lib/ftl/base/ftl_base_bdev.o 00:03:43.768 CC lib/ftl/ftl_trace.o 00:03:44.027 LIB libspdk_nbd.a 00:03:44.027 SO libspdk_nbd.so.6.0 00:03:44.027 LIB libspdk_scsi.a 00:03:44.285 SYMLINK libspdk_nbd.so 00:03:44.285 SO libspdk_scsi.so.8.0 00:03:44.285 SYMLINK libspdk_scsi.so 00:03:44.285 LIB libspdk_ublk.a 00:03:44.285 SO libspdk_ublk.so.2.0 00:03:44.285 SYMLINK libspdk_ublk.so 00:03:44.285 CC lib/vhost/vhost.o 00:03:44.285 CC lib/iscsi/conn.o 00:03:44.285 CC lib/iscsi/init_grp.o 00:03:44.285 CC lib/vhost/vhost_rpc.o 00:03:44.285 CC lib/vhost/vhost_scsi.o 00:03:44.285 CC lib/iscsi/iscsi.o 00:03:44.285 CC lib/vhost/vhost_blk.o 00:03:44.285 CC lib/iscsi/md5.o 00:03:44.285 CC lib/vhost/rte_vhost_user.o 00:03:44.285 CC lib/iscsi/param.o 00:03:44.285 CC lib/iscsi/portal_grp.o 00:03:44.285 CC lib/iscsi/tgt_node.o 00:03:44.285 CC lib/iscsi/iscsi_subsystem.o 00:03:44.285 CC lib/iscsi/iscsi_rpc.o 00:03:44.285 CC lib/iscsi/task.o 00:03:44.543 LIB libspdk_ftl.a 00:03:44.801 SO libspdk_ftl.so.8.0 00:03:45.058 SYMLINK libspdk_ftl.so 00:03:45.624 LIB libspdk_vhost.a 00:03:45.624 SO libspdk_vhost.so.7.1 00:03:45.624 SYMLINK libspdk_vhost.so 00:03:45.882 LIB libspdk_iscsi.a 00:03:45.882 LIB libspdk_nvmf.a 00:03:45.882 SO libspdk_iscsi.so.7.0 00:03:45.882 SO libspdk_nvmf.so.17.0 00:03:45.882 SYMLINK libspdk_iscsi.so 00:03:46.140 SYMLINK libspdk_nvmf.so 00:03:46.140 CC module/vfu_device/vfu_virtio.o 00:03:46.140 CC module/vfu_device/vfu_virtio_blk.o 00:03:46.140 CC module/env_dpdk/env_dpdk_rpc.o 00:03:46.140 CC module/vfu_device/vfu_virtio_scsi.o 00:03:46.140 CC module/vfu_device/vfu_virtio_rpc.o 00:03:46.397 CC module/sock/posix/posix.o 00:03:46.397 CC module/accel/ioat/accel_ioat.o 00:03:46.397 CC module/accel/error/accel_error.o 00:03:46.397 CC module/blob/bdev/blob_bdev.o 00:03:46.397 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.397 CC module/accel/iaa/accel_iaa.o 00:03:46.397 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:46.397 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.397 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.397 CC module/accel/error/accel_error_rpc.o 00:03:46.397 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.397 CC module/accel/dsa/accel_dsa.o 00:03:46.397 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.397 LIB libspdk_env_dpdk_rpc.a 00:03:46.397 SO libspdk_env_dpdk_rpc.so.5.0 00:03:46.397 SYMLINK libspdk_env_dpdk_rpc.so 00:03:46.398 LIB libspdk_scheduler_gscheduler.a 00:03:46.398 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.398 SO libspdk_scheduler_gscheduler.so.3.0 00:03:46.398 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:46.398 LIB libspdk_accel_error.a 00:03:46.398 LIB libspdk_accel_ioat.a 00:03:46.398 LIB libspdk_scheduler_dynamic.a 00:03:46.655 LIB libspdk_accel_iaa.a 00:03:46.655 SO libspdk_accel_error.so.1.0 00:03:46.655 SO libspdk_scheduler_dynamic.so.3.0 00:03:46.655 SO libspdk_accel_ioat.so.5.0 00:03:46.655 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.655 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.655 SO libspdk_accel_iaa.so.2.0 00:03:46.655 LIB libspdk_accel_dsa.a 00:03:46.655 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.655 SYMLINK libspdk_accel_error.so 00:03:46.655 LIB libspdk_blob_bdev.a 00:03:46.655 SO libspdk_accel_dsa.so.4.0 00:03:46.655 SYMLINK libspdk_accel_ioat.so 00:03:46.655 SO libspdk_blob_bdev.so.10.1 00:03:46.655 SYMLINK 
libspdk_accel_iaa.so 00:03:46.655 SYMLINK libspdk_accel_dsa.so 00:03:46.655 SYMLINK libspdk_blob_bdev.so 00:03:46.913 CC module/bdev/error/vbdev_error.o 00:03:46.913 CC module/bdev/null/bdev_null.o 00:03:46.913 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:46.913 CC module/bdev/gpt/gpt.o 00:03:46.913 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.913 CC module/bdev/malloc/bdev_malloc.o 00:03:46.913 CC module/bdev/error/vbdev_error_rpc.o 00:03:46.913 CC module/bdev/lvol/vbdev_lvol.o 00:03:46.913 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:46.913 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.913 CC module/blobfs/bdev/blobfs_bdev.o 00:03:46.913 CC module/bdev/delay/vbdev_delay.o 00:03:46.913 CC module/bdev/aio/bdev_aio.o 00:03:46.913 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.913 CC module/bdev/null/bdev_null_rpc.o 00:03:46.913 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:46.913 CC module/bdev/raid/bdev_raid.o 00:03:46.913 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.913 CC module/bdev/aio/bdev_aio_rpc.o 00:03:46.913 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:46.913 CC module/bdev/raid/bdev_raid_rpc.o 00:03:46.913 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:46.913 CC module/bdev/raid/bdev_raid_sb.o 00:03:46.913 CC module/bdev/raid/raid0.o 00:03:46.913 CC module/bdev/ftl/bdev_ftl.o 00:03:46.913 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:46.913 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:46.913 CC module/bdev/raid/raid1.o 00:03:46.913 CC module/bdev/nvme/bdev_nvme.o 00:03:46.913 CC module/bdev/split/vbdev_split.o 00:03:46.913 CC module/bdev/raid/concat.o 00:03:46.913 CC module/bdev/passthru/vbdev_passthru.o 00:03:46.913 CC module/bdev/split/vbdev_split_rpc.o 00:03:46.913 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:46.913 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:46.913 CC module/bdev/nvme/nvme_rpc.o 00:03:46.913 CC module/bdev/iscsi/bdev_iscsi.o 00:03:46.913 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.913 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:46.913 CC module/bdev/nvme/vbdev_opal.o 00:03:46.913 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:46.913 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:46.913 LIB libspdk_vfu_device.a 00:03:46.913 SO libspdk_vfu_device.so.2.0 00:03:47.171 SYMLINK libspdk_vfu_device.so 00:03:47.171 LIB libspdk_blobfs_bdev.a 00:03:47.171 LIB libspdk_sock_posix.a 00:03:47.171 SO libspdk_blobfs_bdev.so.5.0 00:03:47.171 SO libspdk_sock_posix.so.5.0 00:03:47.171 LIB libspdk_bdev_split.a 00:03:47.171 LIB libspdk_bdev_iscsi.a 00:03:47.171 LIB libspdk_bdev_null.a 00:03:47.171 SYMLINK libspdk_blobfs_bdev.so 00:03:47.171 SO libspdk_bdev_split.so.5.0 00:03:47.428 LIB libspdk_bdev_passthru.a 00:03:47.428 SO libspdk_bdev_iscsi.so.5.0 00:03:47.428 SO libspdk_bdev_null.so.5.0 00:03:47.428 SYMLINK libspdk_sock_posix.so 00:03:47.428 LIB libspdk_bdev_error.a 00:03:47.428 LIB libspdk_bdev_gpt.a 00:03:47.428 SO libspdk_bdev_passthru.so.5.0 00:03:47.428 SYMLINK libspdk_bdev_split.so 00:03:47.428 SO libspdk_bdev_error.so.5.0 00:03:47.428 SO libspdk_bdev_gpt.so.5.0 00:03:47.428 LIB libspdk_bdev_zone_block.a 00:03:47.428 SYMLINK libspdk_bdev_null.so 00:03:47.428 SYMLINK libspdk_bdev_iscsi.so 00:03:47.428 LIB libspdk_bdev_ftl.a 00:03:47.428 LIB libspdk_bdev_malloc.a 00:03:47.429 LIB libspdk_bdev_aio.a 00:03:47.429 SO libspdk_bdev_zone_block.so.5.0 00:03:47.429 SYMLINK libspdk_bdev_passthru.so 00:03:47.429 SO libspdk_bdev_ftl.so.5.0 00:03:47.429 SO libspdk_bdev_malloc.so.5.0 00:03:47.429 SYMLINK libspdk_bdev_error.so 00:03:47.429 SYMLINK libspdk_bdev_gpt.so 
00:03:47.429 SO libspdk_bdev_aio.so.5.0 00:03:47.429 LIB libspdk_bdev_delay.a 00:03:47.429 SYMLINK libspdk_bdev_zone_block.so 00:03:47.429 SYMLINK libspdk_bdev_ftl.so 00:03:47.429 SYMLINK libspdk_bdev_malloc.so 00:03:47.429 SYMLINK libspdk_bdev_aio.so 00:03:47.429 SO libspdk_bdev_delay.so.5.0 00:03:47.429 SYMLINK libspdk_bdev_delay.so 00:03:47.429 LIB libspdk_bdev_lvol.a 00:03:47.686 LIB libspdk_bdev_virtio.a 00:03:47.686 SO libspdk_bdev_lvol.so.5.0 00:03:47.686 SO libspdk_bdev_virtio.so.5.0 00:03:47.686 SYMLINK libspdk_bdev_lvol.so 00:03:47.686 SYMLINK libspdk_bdev_virtio.so 00:03:47.944 LIB libspdk_bdev_raid.a 00:03:47.944 SO libspdk_bdev_raid.so.5.0 00:03:47.944 SYMLINK libspdk_bdev_raid.so 00:03:48.877 LIB libspdk_bdev_nvme.a 00:03:49.145 SO libspdk_bdev_nvme.so.6.0 00:03:49.145 SYMLINK libspdk_bdev_nvme.so 00:03:49.430 CC module/event/subsystems/sock/sock.o 00:03:49.430 CC module/event/subsystems/iobuf/iobuf.o 00:03:49.430 CC module/event/subsystems/vmd/vmd.o 00:03:49.431 CC module/event/subsystems/scheduler/scheduler.o 00:03:49.431 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:49.431 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:49.431 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:49.431 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:49.431 LIB libspdk_event_sock.a 00:03:49.431 LIB libspdk_event_vhost_blk.a 00:03:49.689 LIB libspdk_event_vfu_tgt.a 00:03:49.689 LIB libspdk_event_scheduler.a 00:03:49.689 LIB libspdk_event_vmd.a 00:03:49.689 SO libspdk_event_sock.so.4.0 00:03:49.689 LIB libspdk_event_iobuf.a 00:03:49.689 SO libspdk_event_vhost_blk.so.2.0 00:03:49.689 SO libspdk_event_scheduler.so.3.0 00:03:49.689 SO libspdk_event_vfu_tgt.so.2.0 00:03:49.689 SO libspdk_event_vmd.so.5.0 00:03:49.689 SO libspdk_event_iobuf.so.2.0 00:03:49.689 SYMLINK libspdk_event_sock.so 00:03:49.689 SYMLINK libspdk_event_vhost_blk.so 00:03:49.689 SYMLINK libspdk_event_vfu_tgt.so 00:03:49.689 SYMLINK libspdk_event_scheduler.so 00:03:49.689 SYMLINK libspdk_event_vmd.so 00:03:49.689 SYMLINK libspdk_event_iobuf.so 00:03:49.689 CC module/event/subsystems/accel/accel.o 00:03:49.948 LIB libspdk_event_accel.a 00:03:49.948 SO libspdk_event_accel.so.5.0 00:03:49.948 SYMLINK libspdk_event_accel.so 00:03:50.205 CC module/event/subsystems/bdev/bdev.o 00:03:50.205 LIB libspdk_event_bdev.a 00:03:50.205 SO libspdk_event_bdev.so.5.0 00:03:50.462 SYMLINK libspdk_event_bdev.so 00:03:50.462 CC module/event/subsystems/nbd/nbd.o 00:03:50.462 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:50.462 CC module/event/subsystems/ublk/ublk.o 00:03:50.462 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:50.462 CC module/event/subsystems/scsi/scsi.o 00:03:50.720 LIB libspdk_event_nbd.a 00:03:50.720 LIB libspdk_event_ublk.a 00:03:50.720 SO libspdk_event_nbd.so.5.0 00:03:50.720 SO libspdk_event_ublk.so.2.0 00:03:50.720 LIB libspdk_event_scsi.a 00:03:50.720 SO libspdk_event_scsi.so.5.0 00:03:50.720 SYMLINK libspdk_event_nbd.so 00:03:50.720 SYMLINK libspdk_event_ublk.so 00:03:50.720 LIB libspdk_event_nvmf.a 00:03:50.720 SYMLINK libspdk_event_scsi.so 00:03:50.720 SO libspdk_event_nvmf.so.5.0 00:03:50.720 SYMLINK libspdk_event_nvmf.so 00:03:50.720 CC module/event/subsystems/iscsi/iscsi.o 00:03:50.720 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:50.978 LIB libspdk_event_iscsi.a 00:03:50.978 LIB libspdk_event_vhost_scsi.a 00:03:50.978 SO libspdk_event_iscsi.so.5.0 00:03:50.978 SO libspdk_event_vhost_scsi.so.2.0 00:03:50.978 SYMLINK libspdk_event_iscsi.so 00:03:50.978 SYMLINK libspdk_event_vhost_scsi.so 
00:03:51.237 SO libspdk.so.5.0 00:03:51.237 SYMLINK libspdk.so 00:03:51.237 CC app/trace_record/trace_record.o 00:03:51.237 CC app/spdk_nvme_identify/identify.o 00:03:51.237 CC app/spdk_nvme_perf/perf.o 00:03:51.237 CC app/spdk_lspci/spdk_lspci.o 00:03:51.237 CXX app/trace/trace.o 00:03:51.237 CC app/spdk_top/spdk_top.o 00:03:51.237 CC app/spdk_nvme_discover/discovery_aer.o 00:03:51.237 TEST_HEADER include/spdk/accel.h 00:03:51.237 TEST_HEADER include/spdk/accel_module.h 00:03:51.237 CC test/rpc_client/rpc_client_test.o 00:03:51.237 TEST_HEADER include/spdk/assert.h 00:03:51.237 TEST_HEADER include/spdk/barrier.h 00:03:51.237 TEST_HEADER include/spdk/base64.h 00:03:51.237 TEST_HEADER include/spdk/bdev.h 00:03:51.237 TEST_HEADER include/spdk/bdev_module.h 00:03:51.237 TEST_HEADER include/spdk/bdev_zone.h 00:03:51.237 TEST_HEADER include/spdk/bit_array.h 00:03:51.237 TEST_HEADER include/spdk/bit_pool.h 00:03:51.237 TEST_HEADER include/spdk/blob_bdev.h 00:03:51.237 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:51.237 TEST_HEADER include/spdk/blobfs.h 00:03:51.237 TEST_HEADER include/spdk/blob.h 00:03:51.237 TEST_HEADER include/spdk/conf.h 00:03:51.237 TEST_HEADER include/spdk/config.h 00:03:51.237 CC app/spdk_dd/spdk_dd.o 00:03:51.237 TEST_HEADER include/spdk/cpuset.h 00:03:51.496 TEST_HEADER include/spdk/crc16.h 00:03:51.496 TEST_HEADER include/spdk/crc32.h 00:03:51.496 CC app/iscsi_tgt/iscsi_tgt.o 00:03:51.496 TEST_HEADER include/spdk/crc64.h 00:03:51.496 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:51.496 TEST_HEADER include/spdk/dif.h 00:03:51.496 TEST_HEADER include/spdk/dma.h 00:03:51.496 CC app/nvmf_tgt/nvmf_main.o 00:03:51.496 TEST_HEADER include/spdk/endian.h 00:03:51.496 CC app/vhost/vhost.o 00:03:51.496 TEST_HEADER include/spdk/env_dpdk.h 00:03:51.496 TEST_HEADER include/spdk/env.h 00:03:51.496 TEST_HEADER include/spdk/event.h 00:03:51.496 CC app/fio/nvme/fio_plugin.o 00:03:51.496 TEST_HEADER include/spdk/fd_group.h 00:03:51.496 CC examples/ioat/perf/perf.o 00:03:51.496 CC examples/vmd/lsvmd/lsvmd.o 00:03:51.496 CC test/env/vtophys/vtophys.o 00:03:51.496 CC test/env/memory/memory_ut.o 00:03:51.496 TEST_HEADER include/spdk/fd.h 00:03:51.496 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.496 CC examples/nvme/reconnect/reconnect.o 00:03:51.496 TEST_HEADER include/spdk/file.h 00:03:51.496 CC test/thread/poller_perf/poller_perf.o 00:03:51.496 CC test/env/pci/pci_ut.o 00:03:51.496 CC examples/nvme/arbitration/arbitration.o 00:03:51.496 TEST_HEADER include/spdk/ftl.h 00:03:51.496 CC examples/util/zipf/zipf.o 00:03:51.496 CC test/event/event_perf/event_perf.o 00:03:51.496 CC examples/nvme/hello_world/hello_world.o 00:03:51.496 TEST_HEADER include/spdk/gpt_spec.h 00:03:51.496 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:51.496 CC test/nvme/aer/aer.o 00:03:51.496 TEST_HEADER include/spdk/hexlify.h 00:03:51.496 CC examples/ioat/verify/verify.o 00:03:51.496 CC examples/accel/perf/accel_perf.o 00:03:51.497 CC app/spdk_tgt/spdk_tgt.o 00:03:51.497 CC examples/idxd/perf/perf.o 00:03:51.497 TEST_HEADER include/spdk/histogram_data.h 00:03:51.497 CC examples/sock/hello_world/hello_sock.o 00:03:51.497 TEST_HEADER include/spdk/idxd.h 00:03:51.497 TEST_HEADER include/spdk/idxd_spec.h 00:03:51.497 TEST_HEADER include/spdk/init.h 00:03:51.497 TEST_HEADER include/spdk/ioat.h 00:03:51.497 TEST_HEADER include/spdk/ioat_spec.h 00:03:51.497 TEST_HEADER include/spdk/iscsi_spec.h 00:03:51.497 TEST_HEADER include/spdk/json.h 00:03:51.497 TEST_HEADER include/spdk/jsonrpc.h 00:03:51.497 
TEST_HEADER include/spdk/likely.h 00:03:51.497 TEST_HEADER include/spdk/log.h 00:03:51.497 TEST_HEADER include/spdk/lvol.h 00:03:51.497 CC app/fio/bdev/fio_plugin.o 00:03:51.497 TEST_HEADER include/spdk/memory.h 00:03:51.497 TEST_HEADER include/spdk/mmio.h 00:03:51.497 TEST_HEADER include/spdk/nbd.h 00:03:51.497 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.497 CC examples/thread/thread/thread_ex.o 00:03:51.497 CC test/accel/dif/dif.o 00:03:51.497 TEST_HEADER include/spdk/notify.h 00:03:51.497 CC examples/nvmf/nvmf/nvmf.o 00:03:51.497 CC test/blobfs/mkfs/mkfs.o 00:03:51.497 CC test/bdev/bdevio/bdevio.o 00:03:51.497 TEST_HEADER include/spdk/nvme.h 00:03:51.497 CC test/app/bdev_svc/bdev_svc.o 00:03:51.497 TEST_HEADER include/spdk/nvme_intel.h 00:03:51.497 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:51.497 CC test/dma/test_dma/test_dma.o 00:03:51.497 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:51.497 CC examples/blob/hello_world/hello_blob.o 00:03:51.497 CC examples/bdev/bdevperf/bdevperf.o 00:03:51.497 CC test/lvol/esnap/esnap.o 00:03:51.497 TEST_HEADER include/spdk/nvme_spec.h 00:03:51.497 CC test/env/mem_callbacks/mem_callbacks.o 00:03:51.497 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:51.497 TEST_HEADER include/spdk/nvme_zns.h 00:03:51.497 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:51.497 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:51.497 TEST_HEADER include/spdk/nvmf.h 00:03:51.497 TEST_HEADER include/spdk/nvmf_spec.h 00:03:51.497 TEST_HEADER include/spdk/nvmf_transport.h 00:03:51.497 TEST_HEADER include/spdk/opal.h 00:03:51.497 TEST_HEADER include/spdk/opal_spec.h 00:03:51.497 TEST_HEADER include/spdk/pci_ids.h 00:03:51.497 TEST_HEADER include/spdk/pipe.h 00:03:51.497 TEST_HEADER include/spdk/queue.h 00:03:51.497 TEST_HEADER include/spdk/reduce.h 00:03:51.497 TEST_HEADER include/spdk/rpc.h 00:03:51.497 TEST_HEADER include/spdk/scheduler.h 00:03:51.497 TEST_HEADER include/spdk/scsi.h 00:03:51.497 TEST_HEADER include/spdk/scsi_spec.h 00:03:51.497 TEST_HEADER include/spdk/sock.h 00:03:51.497 TEST_HEADER include/spdk/stdinc.h 00:03:51.497 TEST_HEADER include/spdk/string.h 00:03:51.497 TEST_HEADER include/spdk/thread.h 00:03:51.497 TEST_HEADER include/spdk/trace.h 00:03:51.497 LINK spdk_lspci 00:03:51.497 TEST_HEADER include/spdk/trace_parser.h 00:03:51.497 TEST_HEADER include/spdk/tree.h 00:03:51.497 TEST_HEADER include/spdk/ublk.h 00:03:51.497 TEST_HEADER include/spdk/util.h 00:03:51.497 TEST_HEADER include/spdk/uuid.h 00:03:51.497 TEST_HEADER include/spdk/version.h 00:03:51.497 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:51.497 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:51.497 TEST_HEADER include/spdk/vhost.h 00:03:51.497 TEST_HEADER include/spdk/vmd.h 00:03:51.497 TEST_HEADER include/spdk/xor.h 00:03:51.497 TEST_HEADER include/spdk/zipf.h 00:03:51.763 CXX test/cpp_headers/accel.o 00:03:51.763 LINK rpc_client_test 00:03:51.763 LINK lsvmd 00:03:51.763 LINK spdk_nvme_discover 00:03:51.763 LINK vtophys 00:03:51.763 LINK event_perf 00:03:51.763 LINK poller_perf 00:03:51.763 LINK zipf 00:03:51.763 LINK env_dpdk_post_init 00:03:51.763 LINK interrupt_tgt 00:03:51.763 LINK nvmf_tgt 00:03:51.763 LINK vhost 00:03:51.763 LINK spdk_trace_record 00:03:51.763 LINK iscsi_tgt 00:03:51.763 LINK spdk_tgt 00:03:51.763 LINK ioat_perf 00:03:51.763 LINK bdev_svc 00:03:51.763 LINK verify 00:03:51.763 LINK hello_world 00:03:51.763 LINK mkfs 00:03:51.763 LINK mem_callbacks 00:03:52.027 LINK hello_sock 00:03:52.027 LINK hello_blob 00:03:52.027 LINK hello_bdev 00:03:52.027 LINK thread 
00:03:52.027 CXX test/cpp_headers/accel_module.o 00:03:52.027 LINK aer 00:03:52.027 CC test/event/reactor/reactor.o 00:03:52.027 CXX test/cpp_headers/assert.o 00:03:52.027 LINK arbitration 00:03:52.027 LINK nvmf 00:03:52.027 LINK spdk_dd 00:03:52.027 LINK reconnect 00:03:52.027 LINK idxd_perf 00:03:52.027 LINK spdk_trace 00:03:52.027 CC examples/nvme/hotplug/hotplug.o 00:03:52.027 CC examples/vmd/led/led.o 00:03:52.027 CC test/event/reactor_perf/reactor_perf.o 00:03:52.027 LINK pci_ut 00:03:52.286 CC test/event/app_repeat/app_repeat.o 00:03:52.286 CC test/app/histogram_perf/histogram_perf.o 00:03:52.286 CC test/nvme/reset/reset.o 00:03:52.286 CXX test/cpp_headers/barrier.o 00:03:52.286 LINK dif 00:03:52.286 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:52.286 CC test/event/scheduler/scheduler.o 00:03:52.286 LINK test_dma 00:03:52.286 CXX test/cpp_headers/base64.o 00:03:52.286 LINK bdevio 00:03:52.286 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.286 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:52.286 CC examples/nvme/abort/abort.o 00:03:52.287 CC examples/blob/cli/blobcli.o 00:03:52.287 CXX test/cpp_headers/bdev.o 00:03:52.287 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:52.287 LINK nvme_fuzz 00:03:52.287 LINK accel_perf 00:03:52.287 CC test/nvme/sgl/sgl.o 00:03:52.287 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.287 LINK reactor 00:03:52.287 CC test/app/jsoncat/jsoncat.o 00:03:52.287 LINK nvme_manage 00:03:52.287 CC test/nvme/e2edp/nvme_dp.o 00:03:52.287 CXX test/cpp_headers/bdev_module.o 00:03:52.287 LINK memory_ut 00:03:52.287 CXX test/cpp_headers/bdev_zone.o 00:03:52.287 CC test/nvme/overhead/overhead.o 00:03:52.287 CXX test/cpp_headers/bit_array.o 00:03:52.553 LINK spdk_bdev 00:03:52.553 LINK led 00:03:52.553 LINK reactor_perf 00:03:52.553 LINK spdk_nvme 00:03:52.553 CXX test/cpp_headers/bit_pool.o 00:03:52.553 CC test/app/stub/stub.o 00:03:52.553 LINK histogram_perf 00:03:52.553 CXX test/cpp_headers/blob_bdev.o 00:03:52.553 LINK app_repeat 00:03:52.553 CC test/nvme/err_injection/err_injection.o 00:03:52.553 CC test/nvme/startup/startup.o 00:03:52.553 CC test/nvme/reserve/reserve.o 00:03:52.553 CC test/nvme/simple_copy/simple_copy.o 00:03:52.553 LINK cmb_copy 00:03:52.553 CC test/nvme/connect_stress/connect_stress.o 00:03:52.553 LINK hotplug 00:03:52.553 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.553 LINK jsoncat 00:03:52.553 CXX test/cpp_headers/blobfs.o 00:03:52.553 CC test/nvme/boot_partition/boot_partition.o 00:03:52.553 LINK pmr_persistence 00:03:52.553 CC test/nvme/compliance/nvme_compliance.o 00:03:52.553 CC test/nvme/fused_ordering/fused_ordering.o 00:03:52.553 CXX test/cpp_headers/blob.o 00:03:52.553 LINK scheduler 00:03:52.553 CXX test/cpp_headers/conf.o 00:03:52.553 CXX test/cpp_headers/config.o 00:03:52.817 CXX test/cpp_headers/cpuset.o 00:03:52.817 CXX test/cpp_headers/crc16.o 00:03:52.817 CXX test/cpp_headers/crc32.o 00:03:52.817 CXX test/cpp_headers/crc64.o 00:03:52.817 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:52.817 LINK reset 00:03:52.817 CC test/nvme/fdp/fdp.o 00:03:52.817 CC test/nvme/cuse/cuse.o 00:03:52.817 CXX test/cpp_headers/dif.o 00:03:52.817 CXX test/cpp_headers/dma.o 00:03:52.817 CXX test/cpp_headers/endian.o 00:03:52.817 CXX test/cpp_headers/env_dpdk.o 00:03:52.817 CXX test/cpp_headers/env.o 00:03:52.817 CXX test/cpp_headers/event.o 00:03:52.817 CXX test/cpp_headers/fd_group.o 00:03:52.817 LINK sgl 00:03:52.817 CXX test/cpp_headers/fd.o 00:03:52.817 LINK stub 00:03:52.817 LINK spdk_nvme_perf 00:03:52.817 LINK startup 
00:03:52.817 CXX test/cpp_headers/file.o 00:03:52.817 CXX test/cpp_headers/ftl.o 00:03:52.817 LINK err_injection 00:03:52.817 LINK nvme_dp 00:03:52.817 CXX test/cpp_headers/gpt_spec.o 00:03:52.817 LINK spdk_nvme_identify 00:03:52.817 CXX test/cpp_headers/hexlify.o 00:03:52.817 LINK bdevperf 00:03:52.817 CXX test/cpp_headers/histogram_data.o 00:03:52.817 LINK overhead 00:03:52.817 LINK connect_stress 00:03:53.076 LINK reserve 00:03:53.076 CXX test/cpp_headers/idxd.o 00:03:53.076 CXX test/cpp_headers/idxd_spec.o 00:03:53.076 LINK spdk_top 00:03:53.076 LINK abort 00:03:53.076 LINK boot_partition 00:03:53.076 LINK simple_copy 00:03:53.076 CXX test/cpp_headers/init.o 00:03:53.076 CXX test/cpp_headers/ioat.o 00:03:53.076 CXX test/cpp_headers/ioat_spec.o 00:03:53.076 CXX test/cpp_headers/iscsi_spec.o 00:03:53.076 CXX test/cpp_headers/json.o 00:03:53.076 CXX test/cpp_headers/jsonrpc.o 00:03:53.076 CXX test/cpp_headers/likely.o 00:03:53.076 CXX test/cpp_headers/log.o 00:03:53.076 LINK fused_ordering 00:03:53.076 LINK vhost_fuzz 00:03:53.076 CXX test/cpp_headers/lvol.o 00:03:53.076 CXX test/cpp_headers/memory.o 00:03:53.076 LINK doorbell_aers 00:03:53.076 CXX test/cpp_headers/mmio.o 00:03:53.076 CXX test/cpp_headers/nbd.o 00:03:53.076 CXX test/cpp_headers/notify.o 00:03:53.076 CXX test/cpp_headers/nvme.o 00:03:53.076 CXX test/cpp_headers/nvme_intel.o 00:03:53.076 CXX test/cpp_headers/nvme_ocssd.o 00:03:53.076 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:53.076 CXX test/cpp_headers/nvme_spec.o 00:03:53.076 CXX test/cpp_headers/nvme_zns.o 00:03:53.076 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.076 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.338 CXX test/cpp_headers/nvmf.o 00:03:53.338 CXX test/cpp_headers/nvmf_spec.o 00:03:53.338 CXX test/cpp_headers/nvmf_transport.o 00:03:53.338 CXX test/cpp_headers/opal.o 00:03:53.338 LINK blobcli 00:03:53.338 CXX test/cpp_headers/opal_spec.o 00:03:53.338 CXX test/cpp_headers/pci_ids.o 00:03:53.338 CXX test/cpp_headers/pipe.o 00:03:53.338 CXX test/cpp_headers/queue.o 00:03:53.338 CXX test/cpp_headers/reduce.o 00:03:53.338 CXX test/cpp_headers/rpc.o 00:03:53.338 LINK nvme_compliance 00:03:53.338 CXX test/cpp_headers/scheduler.o 00:03:53.338 CXX test/cpp_headers/scsi.o 00:03:53.338 CXX test/cpp_headers/scsi_spec.o 00:03:53.338 CXX test/cpp_headers/sock.o 00:03:53.338 CXX test/cpp_headers/stdinc.o 00:03:53.338 CXX test/cpp_headers/string.o 00:03:53.338 CXX test/cpp_headers/thread.o 00:03:53.338 LINK fdp 00:03:53.338 CXX test/cpp_headers/trace.o 00:03:53.338 CXX test/cpp_headers/trace_parser.o 00:03:53.338 CXX test/cpp_headers/tree.o 00:03:53.338 CXX test/cpp_headers/ublk.o 00:03:53.338 CXX test/cpp_headers/util.o 00:03:53.338 CXX test/cpp_headers/uuid.o 00:03:53.338 CXX test/cpp_headers/version.o 00:03:53.338 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.338 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.338 CXX test/cpp_headers/vhost.o 00:03:53.338 CXX test/cpp_headers/vmd.o 00:03:53.338 CXX test/cpp_headers/xor.o 00:03:53.338 CXX test/cpp_headers/zipf.o 00:03:54.268 LINK cuse 00:03:54.525 LINK iscsi_fuzz 00:03:57.063 LINK esnap 00:03:57.063 00:03:57.063 real 0m37.857s 00:03:57.063 user 7m13.766s 00:03:57.063 sys 1m36.687s 00:03:57.063 10:34:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:57.063 10:34:13 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.063 ************************************ 00:03:57.063 END TEST make 00:03:57.063 ************************************ 00:03:57.320 10:34:13 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.320 10:34:13 -- nvmf/common.sh@7 -- # uname -s 00:03:57.320 10:34:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.320 10:34:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.320 10:34:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.320 10:34:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.320 10:34:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.320 10:34:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.320 10:34:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.320 10:34:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.320 10:34:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.320 10:34:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.320 10:34:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.320 10:34:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.321 10:34:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.321 10:34:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.321 10:34:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.321 10:34:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.321 10:34:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.321 10:34:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.321 10:34:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.321 10:34:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.321 10:34:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.321 10:34:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.321 10:34:13 -- paths/export.sh@5 -- # export PATH 00:03:57.321 10:34:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.321 10:34:13 -- nvmf/common.sh@46 -- # : 0 00:03:57.321 10:34:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:57.321 10:34:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:57.321 10:34:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:57.321 10:34:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.321 10:34:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.321 10:34:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:57.321 10:34:13 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:03:57.321 10:34:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:57.321 10:34:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.321 10:34:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.321 10:34:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.321 10:34:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.321 10:34:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.321 10:34:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.321 10:34:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.321 10:34:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.321 10:34:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.321 10:34:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.321 10:34:13 -- spdk/autotest.sh@48 -- # udevadm_pid=3302656 00:03:57.321 10:34:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.321 10:34:13 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:57.321 10:34:13 -- spdk/autotest.sh@54 -- # echo 3302658 00:03:57.321 10:34:13 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:57.321 10:34:13 -- spdk/autotest.sh@56 -- # echo 3302659 00:03:57.321 10:34:13 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:57.321 10:34:13 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:57.321 10:34:13 -- spdk/autotest.sh@60 -- # echo 3302660 00:03:57.321 10:34:13 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:57.321 10:34:13 -- spdk/autotest.sh@62 -- # echo 3302661 00:03:57.321 10:34:13 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:57.321 10:34:13 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:57.321 10:34:13 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:57.321 10:34:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:57.321 10:34:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.321 10:34:13 -- spdk/autotest.sh@70 -- # create_test_list 00:03:57.321 10:34:13 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:57.321 10:34:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.321 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:57.321 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:57.321 10:34:13 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:57.321 10:34:13 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.321 10:34:13 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.321 10:34:13 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:57.321 10:34:13 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.321 10:34:13 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:57.321 10:34:13 -- common/autotest_common.sh@1440 -- # uname 00:03:57.321 10:34:13 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:57.321 10:34:13 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:57.321 10:34:13 -- common/autotest_common.sh@1460 -- # uname 00:03:57.321 10:34:13 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:57.321 10:34:13 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:57.321 10:34:13 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:57.321 10:34:14 -- spdk/autotest.sh@83 -- # hash lcov 00:03:57.321 10:34:14 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:57.321 10:34:14 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:57.321 --rc lcov_branch_coverage=1 00:03:57.321 --rc lcov_function_coverage=1 00:03:57.321 --rc genhtml_branch_coverage=1 00:03:57.321 --rc genhtml_function_coverage=1 00:03:57.321 --rc genhtml_legend=1 00:03:57.321 --rc geninfo_all_blocks=1 00:03:57.321 ' 00:03:57.321 10:34:14 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:57.321 --rc lcov_branch_coverage=1 00:03:57.321 --rc lcov_function_coverage=1 00:03:57.321 --rc genhtml_branch_coverage=1 00:03:57.321 --rc genhtml_function_coverage=1 00:03:57.321 --rc genhtml_legend=1 00:03:57.321 --rc geninfo_all_blocks=1 00:03:57.321 ' 00:03:57.321 10:34:14 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:57.321 --rc lcov_branch_coverage=1 00:03:57.321 --rc lcov_function_coverage=1 00:03:57.321 --rc genhtml_branch_coverage=1 00:03:57.321 --rc genhtml_function_coverage=1 00:03:57.321 --rc genhtml_legend=1 00:03:57.321 
--rc geninfo_all_blocks=1 00:03:57.321 --no-external' 00:03:57.321 10:34:14 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:57.321 --rc lcov_branch_coverage=1 00:03:57.321 --rc lcov_function_coverage=1 00:03:57.321 --rc genhtml_branch_coverage=1 00:03:57.321 --rc genhtml_function_coverage=1 00:03:57.321 --rc genhtml_legend=1 00:03:57.321 --rc geninfo_all_blocks=1 00:03:57.321 --no-external' 00:03:57.321 10:34:14 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:57.321 lcov: LCOV version 1.14 00:03:57.321 10:34:14 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:59.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:59.218 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:59.219 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions 
found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:59.219 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:59.219 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:59.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 
00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:59.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:59.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:17.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:17.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:17.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:17.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:17.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:17.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:39.206 10:34:52 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:39.206 10:34:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:39.206 10:34:52 -- common/autotest_common.sh@10 -- # set +x 00:04:39.206 10:34:52 -- spdk/autotest.sh@102 -- # rm -f 00:04:39.206 10:34:52 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.206 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:39.206 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:39.206 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:39.206 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:39.206 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:39.206 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:39.206 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:39.206 0000:00:04.1 (8086 
0e21): Already using the ioatdma driver 00:04:39.206 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:39.206 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:39.206 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:39.206 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:39.206 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:39.206 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:39.206 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:39.206 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:39.206 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:39.206 10:34:54 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:39.206 10:34:54 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.206 10:34:54 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.206 10:34:54 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.206 10:34:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.206 10:34:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.206 10:34:54 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.206 10:34:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.206 10:34:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.206 10:34:54 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:39.206 10:34:54 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:39.206 10:34:54 -- spdk/autotest.sh@121 -- # grep -v p 00:04:39.206 10:34:54 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:39.206 10:34:54 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:39.206 10:34:54 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:39.206 10:34:54 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:39.206 10:34:54 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.206 No valid GPT data, bailing 00:04:39.206 10:34:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.206 10:34:54 -- scripts/common.sh@393 -- # pt= 00:04:39.206 10:34:54 -- scripts/common.sh@394 -- # return 1 00:04:39.206 10:34:54 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.206 1+0 records in 00:04:39.206 1+0 records out 00:04:39.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0024832 s, 422 MB/s 00:04:39.206 10:34:54 -- spdk/autotest.sh@129 -- # sync 00:04:39.206 10:34:54 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.206 10:34:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.206 10:34:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.774 10:34:56 -- spdk/autotest.sh@135 -- # uname -s 00:04:39.774 10:34:56 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:39.774 10:34:56 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.774 10:34:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.774 10:34:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.774 10:34:56 -- common/autotest_common.sh@10 -- # set +x 00:04:39.774 ************************************ 00:04:39.774 START TEST setup.sh 00:04:39.774 ************************************ 00:04:39.774 10:34:56 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.774 * Looking for test storage... 00:04:39.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.774 10:34:56 -- setup/test-setup.sh@10 -- # uname -s 00:04:39.774 10:34:56 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.774 10:34:56 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.774 10:34:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.774 10:34:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.774 10:34:56 -- common/autotest_common.sh@10 -- # set +x 00:04:39.774 ************************************ 00:04:39.774 START TEST acl 00:04:39.774 ************************************ 00:04:39.774 10:34:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.774 * Looking for test storage... 00:04:39.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.774 10:34:56 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.774 10:34:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.774 10:34:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.774 10:34:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.774 10:34:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.774 10:34:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.774 10:34:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.774 10:34:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.774 10:34:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.774 10:34:56 -- setup/acl.sh@12 -- # devs=() 00:04:39.774 10:34:56 -- setup/acl.sh@12 -- # declare -a devs 00:04:39.774 10:34:56 -- setup/acl.sh@13 -- # drivers=() 00:04:39.774 10:34:56 -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.774 10:34:56 -- setup/acl.sh@51 -- # setup reset 00:04:39.774 10:34:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.774 10:34:56 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.148 10:34:57 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:41.148 10:34:57 -- setup/acl.sh@16 -- # local dev driver 00:04:41.148 10:34:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.148 10:34:57 -- setup/acl.sh@15 -- # setup output status 00:04:41.148 10:34:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.148 10:34:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:42.522 Hugepages 00:04:42.522 node hugesize free / total 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 00:04:42.522 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # 
[[ 0000:80:04.4 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.522 10:34:58 -- setup/acl.sh@20 -- # continue 00:04:42.522 10:34:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.522 10:34:59 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:42.522 10:34:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:42.523 10:34:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:42.523 10:34:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:42.523 10:34:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:42.523 10:34:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.523 10:34:59 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:42.523 10:34:59 -- setup/acl.sh@54 -- # run_test denied denied 00:04:42.523 10:34:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.523 10:34:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.523 10:34:59 -- common/autotest_common.sh@10 -- # set +x 00:04:42.523 ************************************ 00:04:42.523 START TEST denied 00:04:42.523 ************************************ 00:04:42.523 10:34:59 -- common/autotest_common.sh@1104 -- # denied 00:04:42.523 10:34:59 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:42.523 10:34:59 -- setup/acl.sh@38 -- # setup output config 00:04:42.523 10:34:59 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:42.523 10:34:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.523 10:34:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.896 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:43.896 10:35:00 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:43.896 10:35:00 -- setup/acl.sh@28 -- # local dev driver 00:04:43.896 10:35:00 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.896 10:35:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:43.896 10:35:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:43.896 10:35:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.896 10:35:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.896 10:35:00 -- setup/acl.sh@41 -- # setup reset 00:04:43.896 10:35:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.896 10:35:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.420 00:04:46.420 real 0m3.874s 00:04:46.420 user 0m1.145s 00:04:46.420 sys 0m1.812s 00:04:46.420 10:35:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.420 10:35:02 -- common/autotest_common.sh@10 -- # set +x 
00:04:46.420 ************************************ 00:04:46.420 END TEST denied 00:04:46.420 ************************************ 00:04:46.420 10:35:02 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:46.420 10:35:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.420 10:35:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.420 10:35:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.420 ************************************ 00:04:46.420 START TEST allowed 00:04:46.420 ************************************ 00:04:46.420 10:35:02 -- common/autotest_common.sh@1104 -- # allowed 00:04:46.420 10:35:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:46.420 10:35:02 -- setup/acl.sh@45 -- # setup output config 00:04:46.420 10:35:02 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:46.420 10:35:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.420 10:35:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.945 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.945 10:35:05 -- setup/acl.sh@47 -- # verify 00:04:48.945 10:35:05 -- setup/acl.sh@28 -- # local dev driver 00:04:48.945 10:35:05 -- setup/acl.sh@48 -- # setup reset 00:04:48.945 10:35:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.945 10:35:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.318 00:04:50.318 real 0m3.962s 00:04:50.318 user 0m1.077s 00:04:50.318 sys 0m1.742s 00:04:50.318 10:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.318 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:50.318 ************************************ 00:04:50.318 END TEST allowed 00:04:50.318 ************************************ 00:04:50.318 00:04:50.318 real 0m10.536s 00:04:50.318 user 0m3.322s 00:04:50.318 sys 0m5.237s 00:04:50.318 10:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.318 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:50.318 ************************************ 00:04:50.318 END TEST acl 00:04:50.318 ************************************ 00:04:50.318 10:35:06 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:50.318 10:35:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.318 10:35:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.318 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:50.318 ************************************ 00:04:50.318 START TEST hugepages 00:04:50.318 ************************************ 00:04:50.318 10:35:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:50.318 * Looking for test storage... 
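The acl tests that just completed exercise the two PCI filters understood by spdk/scripts/setup.sh: with PCI_BLOCKED set, "setup.sh config" skips the listed controller (it stays on the kernel nvme driver, as the "Skipping denied controller" line shows), while PCI_ALLOWED names the controllers that may be rebound (here 0000:88:00.0 moves to vfio-pci). A minimal sketch of the same round trip, run from an SPDK checkout; the BDF 0000:88:00.0 is simply the controller present on this test node:

  # Blocked: the controller is skipped and remains on the nvme driver
  sudo PCI_BLOCKED=' 0000:88:00.0' ./scripts/setup.sh config
  readlink -f /sys/bus/pci/devices/0000:88:00.0/driver    # -> .../drivers/nvme

  # Allowed: the listed controller is rebound for userspace use
  sudo ./scripts/setup.sh reset
  sudo PCI_ALLOWED='0000:88:00.0' ./scripts/setup.sh config
  readlink -f /sys/bus/pci/devices/0000:88:00.0/driver    # -> .../drivers/vfio-pci

  sudo ./scripts/setup.sh reset                            # hand the device back to the kernel driver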
00:04:50.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.318 10:35:06 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:50.318 10:35:06 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:50.318 10:35:06 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:50.318 10:35:06 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:50.318 10:35:06 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:50.319 10:35:06 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:50.319 10:35:06 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:50.319 10:35:06 -- setup/common.sh@18 -- # local node= 00:04:50.319 10:35:06 -- setup/common.sh@19 -- # local var val 00:04:50.319 10:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.319 10:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.319 10:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.319 10:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.319 10:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.319 10:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41657064 kB' 'MemAvailable: 45213820 kB' 'Buffers: 2704 kB' 'Cached: 12198124 kB' 'SwapCached: 0 kB' 'Active: 9289292 kB' 'Inactive: 3518404 kB' 'Active(anon): 8858792 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610556 kB' 'Mapped: 160536 kB' 'Shmem: 8251924 kB' 'KReclaimable: 202948 kB' 'Slab: 579076 kB' 'SReclaimable: 202948 kB' 'SUnreclaim: 376128 kB' 'KernelStack: 12928 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562320 kB' 'Committed_AS: 9989616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:06 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.319 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.319 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # continue 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.320 10:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.320 10:35:07 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.320 10:35:07 -- setup/common.sh@33 -- # echo 2048 00:04:50.320 10:35:07 -- setup/common.sh@33 -- # return 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:50.320 10:35:07 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:50.320 10:35:07 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:50.320 10:35:07 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:50.320 10:35:07 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:50.320 10:35:07 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
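The long compare/continue run above is setup/common.sh walking every field of the captured /proc/meminfo snapshot until it reaches Hugepagesize, which is why the trace ends in "echo 2048": the default hugepage size on this node is 2048 kB, so the suite works in 2 MiB pages and tracks the global pool through /proc/sys/vm/nr_hugepages. Outside the harness the same two values can be read directly; a minimal sketch using plain procfs reads (not the harness helper itself):

  # default hugepage size in kB, and the current size of the global 2 MiB pool
  awk '/^Hugepagesize:/ {print $2}' /proc/meminfo
  cat /proc/sys/vm/nr_hugepages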
00:04:50.320 10:35:07 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:50.320 10:35:07 -- setup/hugepages.sh@207 -- # get_nodes 00:04:50.320 10:35:07 -- setup/hugepages.sh@27 -- # local node 00:04:50.320 10:35:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.320 10:35:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:50.320 10:35:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.320 10:35:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.320 10:35:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.320 10:35:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.320 10:35:07 -- setup/hugepages.sh@208 -- # clear_hp 00:04:50.320 10:35:07 -- setup/hugepages.sh@37 -- # local node hp 00:04:50.320 10:35:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.320 10:35:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.320 10:35:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.320 10:35:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.320 10:35:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.320 10:35:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.320 10:35:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.320 10:35:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.320 10:35:07 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:50.320 10:35:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.320 10:35:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.320 10:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:50.320 ************************************ 00:04:50.320 START TEST default_setup 00:04:50.320 ************************************ 00:04:50.320 10:35:07 -- common/autotest_common.sh@1104 -- # default_setup 00:04:50.320 10:35:07 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:50.320 10:35:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:50.320 10:35:07 -- setup/hugepages.sh@51 -- # shift 00:04:50.320 10:35:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:50.320 10:35:07 -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.320 10:35:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.320 10:35:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:50.320 10:35:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:50.320 10:35:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.320 10:35:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.320 10:35:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.320 10:35:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.320 10:35:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.320 10:35:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:50.320 10:35:07 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.320 10:35:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:50.320 10:35:07 -- setup/hugepages.sh@73 -- # return 0 00:04:50.320 10:35:07 -- setup/hugepages.sh@137 -- # setup output 00:04:50.320 10:35:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.320 10:35:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.697 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.697 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.697 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:52.637 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:52.637 10:35:09 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:52.637 10:35:09 -- setup/hugepages.sh@89 -- # local node 00:04:52.637 10:35:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.637 10:35:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.637 10:35:09 -- setup/hugepages.sh@92 -- # local surp 00:04:52.637 10:35:09 -- setup/hugepages.sh@93 -- # local resv 00:04:52.637 10:35:09 -- setup/hugepages.sh@94 -- # local anon 00:04:52.637 10:35:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.637 10:35:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.637 10:35:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.637 10:35:09 -- setup/common.sh@18 -- # local node= 00:04:52.637 10:35:09 -- setup/common.sh@19 -- # local var val 00:04:52.637 10:35:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.637 10:35:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.637 10:35:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.637 10:35:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.637 10:35:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.637 10:35:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43763928 kB' 'MemAvailable: 47320668 kB' 'Buffers: 2704 kB' 'Cached: 12198216 kB' 'SwapCached: 0 kB' 'Active: 9307072 kB' 'Inactive: 3518404 kB' 'Active(anon): 8876572 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627936 kB' 'Mapped: 160604 kB' 'Shmem: 8252016 kB' 'KReclaimable: 202916 kB' 'Slab: 578760 kB' 'SReclaimable: 202916 kB' 'SUnreclaim: 375844 kB' 'KernelStack: 12800 kB' 'PageTables: 8368 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 
-- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.637 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.637 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.638 10:35:09 -- setup/common.sh@33 -- # echo 0 00:04:52.638 10:35:09 -- setup/common.sh@33 -- # return 0 00:04:52.638 10:35:09 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.638 10:35:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.638 10:35:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.638 10:35:09 -- setup/common.sh@18 -- # local node= 00:04:52.638 10:35:09 -- setup/common.sh@19 -- # local var val 00:04:52.638 10:35:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.638 10:35:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.638 10:35:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.638 10:35:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.638 10:35:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.638 10:35:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43770468 kB' 'MemAvailable: 47327208 kB' 'Buffers: 2704 kB' 'Cached: 12198216 kB' 'SwapCached: 0 kB' 'Active: 9306924 kB' 'Inactive: 3518404 kB' 'Active(anon): 8876424 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627808 kB' 'Mapped: 160604 kB' 'Shmem: 8252016 kB' 'KReclaimable: 202916 kB' 'Slab: 579024 kB' 'SReclaimable: 202916 kB' 'SUnreclaim: 376108 kB' 'KernelStack: 12832 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 
'DirectMap1G: 49283072 kB' 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 
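verify_nr_hugepages is re-reading meminfo here (AnonHugePages came back as 0 above; HugePages_Surp is being read now) after default_setup requested 2097152 kB of hugepages pinned to node 0, i.e. 1024 pages of 2048 kB. The harness drives that through the same per-node sysfs files that clear_hp zeroed earlier; a minimal equivalent sketch for a two-node machine like this one (the node count and page count are taken from this run, and the standard kernel path hugepages-2048kB/nr_hugepages is assumed):

  # place the whole 2 GiB pool on node 0 and none on node 1, then check the result
  echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  echo 0    | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  grep -E '^HugePages_(Total|Free|Surp)' /proc/meminfo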
00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.638 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.638 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 
-- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:52.639 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.639 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.639 10:35:09 -- setup/common.sh@33 -- # echo 0 00:04:52.639 10:35:09 -- setup/common.sh@33 -- # return 0 00:04:52.639 10:35:09 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.639 10:35:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.639 10:35:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.639 10:35:09 -- setup/common.sh@18 -- # local node= 00:04:52.639 10:35:09 -- setup/common.sh@19 -- # local var val 00:04:52.639 10:35:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.639 10:35:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.639 10:35:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.639 10:35:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.639 10:35:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.639 10:35:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.639 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43776236 kB' 'MemAvailable: 47332976 kB' 'Buffers: 2704 kB' 'Cached: 12198228 kB' 'SwapCached: 0 kB' 'Active: 9306152 kB' 'Inactive: 3518404 kB' 'Active(anon): 8875652 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627044 kB' 'Mapped: 160588 kB' 'Shmem: 8252028 kB' 'KReclaimable: 202916 kB' 'Slab: 579004 kB' 'SReclaimable: 202916 kB' 'SUnreclaim: 376088 kB' 'KernelStack: 12864 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 
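The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs above are bash xtrace from the get_meminfo helper in setup/common.sh: it reads the whole meminfo file into an array, then walks it line by line with IFS=': ' until the requested key matches and prints that key's value. A minimal standalone sketch of that scan follows; the function name get_meminfo_value is illustrative, and this is a simplified reconstruction of what the trace shows, not the exact SPDK helper.

#!/usr/bin/env bash
# Simplified reconstruction of the lookup traced above: read /proc/meminfo once,
# scan line by line for the requested key, and print its numeric value
# (the trailing "kB" lands in the throwaway third field).
get_meminfo_value() {
    local get=$1                        # e.g. HugePages_Rsvd
    local line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo      # one array element per meminfo line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip non-matching keys, as in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_value HugePages_Rsvd        # prints e.g. 0 on the node in this log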
00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.640 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.640 10:35:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.641 10:35:09 -- setup/common.sh@33 -- # echo 0 00:04:52.641 10:35:09 -- setup/common.sh@33 -- # return 0 00:04:52.641 10:35:09 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.641 10:35:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.641 nr_hugepages=1024 00:04:52.641 10:35:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.641 resv_hugepages=0 00:04:52.641 10:35:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.641 surplus_hugepages=0 00:04:52.641 10:35:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.641 anon_hugepages=0 00:04:52.641 10:35:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.641 10:35:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.641 10:35:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.641 10:35:09 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:52.641 10:35:09 -- setup/common.sh@18 -- # local node= 00:04:52.641 10:35:09 -- setup/common.sh@19 -- # local var val 00:04:52.641 10:35:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.641 10:35:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.641 10:35:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.641 10:35:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.641 10:35:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.641 10:35:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43775480 kB' 'MemAvailable: 47332220 kB' 'Buffers: 2704 kB' 'Cached: 12198244 kB' 'SwapCached: 0 kB' 'Active: 9306188 kB' 'Inactive: 3518404 kB' 'Active(anon): 8875688 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627044 kB' 'Mapped: 160588 kB' 'Shmem: 8252044 kB' 'KReclaimable: 202916 kB' 'Slab: 579004 kB' 'SReclaimable: 202916 kB' 'SUnreclaim: 376088 kB' 'KernelStack: 12864 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
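At this point the earlier scans have already produced surp=0 and resv=0, and the trace checks that the kernel's reported total matches what the test requested plus surplus and reserved pages, i.e. the "(( 1024 == nr_hugepages + surp + resv ))" test visible above before get_meminfo HugePages_Total is called. The same consistency check written out as a standalone snippet; the variable names mirror the echoed output in the log, and the concrete values are the ones this run reports.

# Consistency check mirrored from the trace: the kernel's HugePages_Total must
# equal the requested page count plus any surplus and reserved pages.
nr_hugepages=1024   # requested by the test
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo
total=1024          # HugePages_Total from get_meminfo

if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: total=$total" >&2
    exit 1
fi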
00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.641 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.641 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # 
continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.642 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.642 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 
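The HugePages_Total scan finishes the same way as the earlier ones; the trace that follows then moves to per-node accounting, where get_meminfo is invoked with an explicit node number, reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and strips the "Node 0 " prefix each line carries (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace). A sketch of that per-node variant, assuming the node files use the standard "Node <N> <Field>: <value> kB" layout shown in the node-0 dump below; the function name is illustrative.

# Per-node variant of the same lookup: read the node-local meminfo file and
# drop the "Node <N> " prefix so the remaining fields parse like /proc/meminfo.
shopt -s extglob                               # required for the +([0-9]) pattern below
get_node_meminfo_value() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # "Node 0 MemFree: ..." -> "MemFree: ..."
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo_value HugePages_Surp 0        # prints e.g. 0 for node 0 in this run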
00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.643 10:35:09 -- setup/common.sh@33 -- # echo 1024 00:04:52.643 10:35:09 -- setup/common.sh@33 -- # return 0 00:04:52.643 10:35:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.643 10:35:09 -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.643 10:35:09 -- setup/hugepages.sh@27 -- # local node 00:04:52.643 10:35:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.643 10:35:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.643 10:35:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.643 10:35:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.643 10:35:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.643 10:35:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.643 10:35:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.643 10:35:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.643 10:35:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.643 10:35:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.643 10:35:09 -- setup/common.sh@18 -- # local node=0 00:04:52.643 10:35:09 -- setup/common.sh@19 -- # local var val 00:04:52.643 10:35:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.643 10:35:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.643 10:35:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.643 10:35:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.643 10:35:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.643 10:35:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19493404 kB' 'MemUsed: 13336480 kB' 'SwapCached: 0 kB' 'Active: 7000484 kB' 'Inactive: 3242024 kB' 'Active(anon): 
6887416 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883448 kB' 'Mapped: 42596 kB' 'AnonPages: 362332 kB' 'Shmem: 6528356 kB' 'KernelStack: 7176 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 327224 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 225108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 
10:35:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': 
' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.643 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.643 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # continue 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.644 10:35:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.644 10:35:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.644 10:35:09 -- setup/common.sh@33 -- # echo 0 00:04:52.644 10:35:09 -- setup/common.sh@33 -- # return 0 00:04:52.644 10:35:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.644 10:35:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.644 10:35:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.644 10:35:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.644 10:35:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.644 node0=1024 expecting 1024 00:04:52.644 10:35:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.644 00:04:52.644 real 0m2.414s 00:04:52.644 user 0m0.674s 00:04:52.644 sys 0m0.866s 00:04:52.644 10:35:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.644 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:04:52.644 ************************************ 00:04:52.644 END TEST default_setup 00:04:52.644 ************************************ 00:04:52.903 10:35:09 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:52.903 10:35:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.903 10:35:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.903 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:04:52.903 ************************************ 00:04:52.903 START TEST per_node_1G_alloc 00:04:52.903 ************************************ 00:04:52.903 10:35:09 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:52.903 10:35:09 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:52.903 10:35:09 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:52.903 10:35:09 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:52.903 10:35:09 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:52.903 10:35:09 -- setup/hugepages.sh@51 -- # shift 00:04:52.903 10:35:09 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:52.903 10:35:09 -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.903 10:35:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.903 10:35:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:52.903 10:35:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:52.903 10:35:09 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:52.903 10:35:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.903 10:35:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.903 10:35:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.903 10:35:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.903 10:35:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.903 10:35:09 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:52.903 10:35:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.903 10:35:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:52.903 10:35:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.903 10:35:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:52.903 10:35:09 -- setup/hugepages.sh@73 -- # return 0 00:04:52.903 10:35:09 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:52.903 
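The per_node_1G_alloc test that starts here asks for 1 GiB (1048576 kB) of hugepages on each of nodes 0 and 1; with the 2048 kB default hugepage size reported in the meminfo dumps above, that works out to 512 pages per node, which is then handed to scripts/setup.sh through NRHUGE and HUGENODE (NRHUGE=512, HUGENODE=0,1 in the trace). A small sketch of that sizing step; the sizes and the resulting values are taken from the trace, while the variable names and the join helper are illustrative rather than the exact test code.

# Sizing step mirrored from the trace: 1 GiB per node expressed in 2 MiB pages,
# then echoed in the form scripts/setup.sh receives for nodes 0 and 1.
size_kb=1048576                     # 1 GiB requested per node
default_hugepage_kb=2048            # Hugepagesize reported in the meminfo dumps above
nr_hugepages=$(( size_kb / default_hugepage_kb ))    # -> 512

node_ids=(0 1)
hugenode=$(IFS=,; echo "${node_ids[*]}")             # join node ids with commas
echo "NRHUGE=$nr_hugepages HUGENODE=$hugenode"       # NRHUGE=512 HUGENODE=0,1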
10:35:09 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:52.903 10:35:09 -- setup/hugepages.sh@146 -- # setup output 00:04:52.903 10:35:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.903 10:35:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.838 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.839 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:53.839 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.839 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.839 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.839 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.839 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.839 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.839 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.839 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.839 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.839 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.839 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.839 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.839 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.839 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.839 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.101 10:35:10 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:54.101 10:35:10 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:54.101 10:35:10 -- setup/hugepages.sh@89 -- # local node 00:04:54.101 10:35:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.101 10:35:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.101 10:35:10 -- setup/hugepages.sh@92 -- # local surp 00:04:54.101 10:35:10 -- setup/hugepages.sh@93 -- # local resv 00:04:54.101 10:35:10 -- setup/hugepages.sh@94 -- # local anon 00:04:54.101 10:35:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.101 10:35:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.101 10:35:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.101 10:35:10 -- setup/common.sh@18 -- # local node= 00:04:54.101 10:35:10 -- setup/common.sh@19 -- # local var val 00:04:54.101 10:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.101 10:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.101 10:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.101 10:35:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.101 10:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.101 10:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43769108 kB' 'MemAvailable: 47325848 kB' 'Buffers: 2704 kB' 'Cached: 12198292 kB' 'SwapCached: 0 kB' 'Active: 9307648 kB' 'Inactive: 3518404 kB' 'Active(anon): 8877148 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628364 kB' 'Mapped: 160532 kB' 
'Shmem: 8252092 kB' 'KReclaimable: 202916 kB' 'Slab: 578712 kB' 'SReclaimable: 202916 kB' 'SUnreclaim: 375796 kB' 'KernelStack: 12864 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.101 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.101 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': 
' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- 
setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.102 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.102 10:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.102 10:35:10 -- setup/common.sh@33 -- # echo 0 00:04:54.102 10:35:10 -- setup/common.sh@33 -- # return 0 00:04:54.102 10:35:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.102 10:35:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.102 10:35:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.102 10:35:10 -- setup/common.sh@18 -- # local node= 00:04:54.102 10:35:10 -- setup/common.sh@19 -- # local var val 00:04:54.102 10:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.102 10:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.102 10:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.102 10:35:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.103 10:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.103 10:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43776212 kB' 'MemAvailable: 47332920 kB' 'Buffers: 2704 kB' 'Cached: 12198296 kB' 'SwapCached: 0 kB' 'Active: 9307832 kB' 'Inactive: 3518404 kB' 'Active(anon): 8877332 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628520 kB' 'Mapped: 160532 kB' 'Shmem: 8252096 kB' 'KReclaimable: 202852 kB' 'Slab: 578600 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375748 kB' 'KernelStack: 12848 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 
10:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 
10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.103 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.103 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 
10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.104 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.104 10:35:10 -- setup/common.sh@33 -- # echo 0 00:04:54.104 10:35:10 -- setup/common.sh@33 -- # return 0 00:04:54.104 10:35:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.104 10:35:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.104 10:35:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.104 10:35:10 -- setup/common.sh@18 -- # local node= 00:04:54.104 10:35:10 -- setup/common.sh@19 -- # local var val 00:04:54.104 10:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.104 10:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.104 10:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.104 10:35:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.104 10:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.104 10:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.104 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43776864 kB' 'MemAvailable: 47333572 kB' 'Buffers: 2704 kB' 'Cached: 12198308 kB' 'SwapCached: 0 kB' 'Active: 9307436 kB' 'Inactive: 3518404 kB' 'Active(anon): 8876936 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628148 kB' 'Mapped: 160528 kB' 'Shmem: 8252108 kB' 'KReclaimable: 202852 kB' 'Slab: 578600 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375748 kB' 'KernelStack: 12912 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # 
continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.105 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.105 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 
10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.106 10:35:10 -- setup/common.sh@33 -- # echo 0 00:04:54.106 10:35:10 -- setup/common.sh@33 -- # return 0 00:04:54.106 10:35:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.106 10:35:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.106 nr_hugepages=1024 00:04:54.106 10:35:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.106 resv_hugepages=0 00:04:54.106 10:35:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.106 surplus_hugepages=0 00:04:54.106 10:35:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.106 anon_hugepages=0 00:04:54.106 10:35:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.106 10:35:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.106 10:35:10 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:04:54.106 10:35:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.106 10:35:10 -- setup/common.sh@18 -- # local node= 00:04:54.106 10:35:10 -- setup/common.sh@19 -- # local var val 00:04:54.106 10:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.106 10:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.106 10:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.106 10:35:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.106 10:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.106 10:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43778808 kB' 'MemAvailable: 47335516 kB' 'Buffers: 2704 kB' 'Cached: 12198324 kB' 'SwapCached: 0 kB' 'Active: 9307592 kB' 'Inactive: 3518404 kB' 'Active(anon): 8877092 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628296 kB' 'Mapped: 160528 kB' 'Shmem: 8252124 kB' 'KReclaimable: 202852 kB' 'Slab: 578696 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375844 kB' 'KernelStack: 12944 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10006512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.106 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.106 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.107 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.107 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.108 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.108 10:35:10 -- setup/common.sh@33 -- # echo 1024 00:04:54.108 10:35:10 -- setup/common.sh@33 -- # return 0 00:04:54.108 10:35:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.108 10:35:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.108 10:35:10 -- setup/hugepages.sh@27 -- # local node 00:04:54.108 10:35:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.108 10:35:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.108 10:35:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.108 10:35:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.108 10:35:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.108 10:35:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.108 10:35:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.108 10:35:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.108 10:35:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.108 10:35:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.108 10:35:10 -- setup/common.sh@18 -- # local node=0 00:04:54.108 10:35:10 -- setup/common.sh@19 -- # local var val 00:04:54.108 10:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.108 10:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.108 10:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.108 10:35:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.108 10:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.108 10:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.108 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829884 kB' 'MemFree: 20551644 kB' 'MemUsed: 12278240 kB' 'SwapCached: 0 kB' 'Active: 7000620 kB' 'Inactive: 3242024 kB' 'Active(anon): 6887552 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883504 kB' 'Mapped: 42536 kB' 'AnonPages: 362348 kB' 'Shmem: 6528412 kB' 'KernelStack: 7176 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 326968 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 224852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- 
setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.109 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.109 10:35:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@33 -- # echo 0 00:04:54.110 10:35:10 -- setup/common.sh@33 -- # return 0 00:04:54.110 10:35:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.110 10:35:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.110 10:35:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.110 10:35:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.110 10:35:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.110 10:35:10 -- setup/common.sh@18 -- # local node=1 00:04:54.110 10:35:10 -- setup/common.sh@19 -- # local var val 00:04:54.110 10:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.110 10:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.110 10:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.110 10:35:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.110 10:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.110 10:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23227512 kB' 'MemUsed: 4484340 kB' 'SwapCached: 0 kB' 'Active: 2306456 kB' 'Inactive: 276380 kB' 'Active(anon): 1989024 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2317524 kB' 'Mapped: 117992 kB' 'AnonPages: 265404 kB' 'Shmem: 1723712 kB' 'KernelStack: 5688 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100736 kB' 'Slab: 251728 kB' 'SReclaimable: 100736 kB' 'SUnreclaim: 150992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- 
setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.110 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.110 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # continue 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.111 10:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.111 10:35:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.111 10:35:10 -- setup/common.sh@33 -- # echo 0 00:04:54.111 10:35:10 -- setup/common.sh@33 -- # return 0 00:04:54.111 10:35:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.111 10:35:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.111 10:35:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.111 10:35:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.111 node0=512 expecting 512 00:04:54.111 10:35:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.111 10:35:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.111 10:35:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.111 10:35:10 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:54.111 node1=512 expecting 512 00:04:54.111 10:35:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:54.111 00:04:54.111 real 0m1.367s 00:04:54.111 user 0m0.542s 00:04:54.111 sys 0m0.789s 00:04:54.111 10:35:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.111 10:35:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.111 ************************************ 00:04:54.111 END TEST per_node_1G_alloc 00:04:54.111 ************************************ 00:04:54.111 10:35:10 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:54.111 
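Editor's note: the per_node_1G_alloc trace above repeatedly exercises the script's per-node meminfo lookup (switch to /sys/devices/system/node/nodeN/meminfo when a node is given, strip the "Node N " prefix, scan key/value pairs until the requested field matches, echo its value). The following is a minimal hedged sketch of that lookup, not the SPDK setup/common.sh itself; the helper name get_meminfo_sketch is hypothetical, while the /proc and /sys paths are the standard Linux ones the trace also reads.

#!/usr/bin/env bash
# Hedged sketch: re-creates the per-node meminfo lookup walked through in the
# trace above. Not the SPDK script; get_meminfo_sketch is a hypothetical name.
shopt -s extglob

get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node queries switch to the node-specific meminfo file when it exists,
    # as the traced script does for node0 and node1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        # Per-node files prefix every row with "Node <N> "; strip it so the key
        # comparison below sees the bare field name.
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"   # e.g. 512 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example: per-node surplus hugepages, mirroring the "echo 0" results in the trace.
get_meminfo_sketch HugePages_Surp 0

On the machine in this log the calls resolve to HugePages_Total=512 and HugePages_Surp=0 on both nodes, which is what lets the test assert node0=512 and node1=512 above.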
10:35:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.111 10:35:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.111 10:35:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.111 ************************************ 00:04:54.111 START TEST even_2G_alloc 00:04:54.111 ************************************ 00:04:54.111 10:35:10 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:54.111 10:35:10 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:54.111 10:35:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.111 10:35:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.111 10:35:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.111 10:35:10 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.111 10:35:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.111 10:35:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.111 10:35:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.111 10:35:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.111 10:35:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.111 10:35:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.111 10:35:10 -- setup/hugepages.sh@83 -- # : 512 00:04:54.111 10:35:10 -- setup/hugepages.sh@84 -- # : 1 00:04:54.111 10:35:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.111 10:35:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.111 10:35:10 -- setup/hugepages.sh@83 -- # : 0 00:04:54.111 10:35:10 -- setup/hugepages.sh@84 -- # : 0 00:04:54.112 10:35:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.112 10:35:10 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:54.112 10:35:10 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:54.112 10:35:10 -- setup/hugepages.sh@153 -- # setup output 00:04:54.112 10:35:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.112 10:35:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.490 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.490 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:55.490 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.490 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.490 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.490 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.490 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.490 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.490 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.490 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.490 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.490 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.490 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.490 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.490 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.490 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:04:55.490 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.490 10:35:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:55.490 10:35:12 -- setup/hugepages.sh@89 -- # local node 00:04:55.490 10:35:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.490 10:35:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.490 10:35:12 -- setup/hugepages.sh@92 -- # local surp 00:04:55.490 10:35:12 -- setup/hugepages.sh@93 -- # local resv 00:04:55.490 10:35:12 -- setup/hugepages.sh@94 -- # local anon 00:04:55.490 10:35:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.490 10:35:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.490 10:35:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.490 10:35:12 -- setup/common.sh@18 -- # local node= 00:04:55.490 10:35:12 -- setup/common.sh@19 -- # local var val 00:04:55.490 10:35:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.490 10:35:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.490 10:35:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.490 10:35:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.490 10:35:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.490 10:35:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43764980 kB' 'MemAvailable: 47321688 kB' 'Buffers: 2704 kB' 'Cached: 12198388 kB' 'SwapCached: 0 kB' 'Active: 9307748 kB' 'Inactive: 3518404 kB' 'Active(anon): 8877248 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628244 kB' 'Mapped: 160596 kB' 'Shmem: 8252188 kB' 'KReclaimable: 202852 kB' 'Slab: 578812 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375960 kB' 'KernelStack: 12880 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10007064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 
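Editor's note: the even_2G_alloc setup traced above requests 2097152 kB of hugepage memory with a 2048 kB page size and HUGE_EVEN_ALLOC=yes, so it expects 1024 pages split evenly across the two NUMA nodes. Below is a hedged sketch of that sizing arithmetic only, not the SPDK hugepages.sh; the variable names are illustrative.

#!/usr/bin/env bash
# Hedged sketch of the even-allocation sizing reflected in the even_2G_alloc run:
# 2097152 kB / 2048 kB per page = 1024 hugepages, spread evenly over NUMA nodes.
shopt -s nullglob

size_kb=2097152        # requested hugepage memory (get_test_nr_hugepages argument)
hugepage_kb=2048       # Hugepagesize reported in the meminfo dumps above
nr_hugepages=$(( size_kb / hugepage_kb ))    # 1024

nodes=(/sys/devices/system/node/node[0-9]*)
no_nodes=${#nodes[@]}
(( no_nodes > 0 )) || no_nodes=1             # fall back to one pool without NUMA sysfs

per_node=$(( nr_hugepages / no_nodes ))      # 512 on the two-node machine in this log

for (( n = 0; n < no_nodes; n++ )); do
    echo "node$n=$per_node expecting $per_node"
done

The verify_nr_hugepages pass that follows in the trace then reads AnonHugePages, HugePages_Surp and HugePages_Rsvd back out of meminfo to confirm the allocation actually landed as 1024 total with no surplus or reserved pages.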
00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.490 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.490 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 
10:35:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.491 10:35:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.491 10:35:12 -- setup/common.sh@33 -- # echo 0 00:04:55.491 10:35:12 -- setup/common.sh@33 -- # 
return 0 00:04:55.491 10:35:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.491 10:35:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.491 10:35:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.491 10:35:12 -- setup/common.sh@18 -- # local node= 00:04:55.491 10:35:12 -- setup/common.sh@19 -- # local var val 00:04:55.491 10:35:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.491 10:35:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.491 10:35:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.491 10:35:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.491 10:35:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.491 10:35:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.491 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43765348 kB' 'MemAvailable: 47322056 kB' 'Buffers: 2704 kB' 'Cached: 12198388 kB' 'SwapCached: 0 kB' 'Active: 9307660 kB' 'Inactive: 3518404 kB' 'Active(anon): 8877160 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628172 kB' 'Mapped: 160680 kB' 'Shmem: 8252188 kB' 'KReclaimable: 202852 kB' 'Slab: 578868 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 376016 kB' 'KernelStack: 12880 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10007076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 
-- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.492 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.492 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 
10:35:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 
10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.493 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.493 10:35:12 -- setup/common.sh@33 -- # echo 0 00:04:55.493 10:35:12 -- setup/common.sh@33 -- # return 0 00:04:55.493 10:35:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.493 10:35:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.493 10:35:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.493 10:35:12 -- setup/common.sh@18 -- # local node= 00:04:55.493 10:35:12 -- setup/common.sh@19 -- # local var val 00:04:55.493 10:35:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.493 10:35:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.493 10:35:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.493 10:35:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.493 10:35:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.493 10:35:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.493 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60541736 kB' 'MemFree: 43765680 kB' 'MemAvailable: 47322388 kB' 'Buffers: 2704 kB' 'Cached: 12198400 kB' 'SwapCached: 0 kB' 'Active: 9307564 kB' 'Inactive: 3518404 kB' 'Active(anon): 8877064 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628048 kB' 'Mapped: 160600 kB' 'Shmem: 8252200 kB' 'KReclaimable: 202852 kB' 'Slab: 578876 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 376024 kB' 'KernelStack: 12912 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10007088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.494 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.494 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # 
continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.495 10:35:12 -- setup/common.sh@33 -- # echo 0 00:04:55.495 10:35:12 -- setup/common.sh@33 -- # return 0 00:04:55.495 10:35:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.495 10:35:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.495 nr_hugepages=1024 00:04:55.495 10:35:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.495 resv_hugepages=0 00:04:55.495 10:35:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.495 surplus_hugepages=0 00:04:55.495 10:35:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.495 anon_hugepages=0 00:04:55.495 10:35:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.495 10:35:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.495 10:35:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.495 10:35:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.495 10:35:12 -- setup/common.sh@18 -- # local node= 00:04:55.495 10:35:12 -- setup/common.sh@19 -- # local var val 00:04:55.495 10:35:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.495 10:35:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.495 10:35:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.495 10:35:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.495 10:35:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.495 10:35:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43766024 kB' 'MemAvailable: 47322732 kB' 'Buffers: 2704 kB' 'Cached: 12198404 kB' 'SwapCached: 0 kB' 'Active: 9307288 kB' 'Inactive: 3518404 kB' 'Active(anon): 8876788 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627768 kB' 'Mapped: 160600 kB' 'Shmem: 8252204 kB' 'KReclaimable: 202852 kB' 'Slab: 578876 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 376024 kB' 'KernelStack: 12912 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 10007104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.495 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.495 10:35:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 
-- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.496 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.496 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 
10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.497 10:35:12 -- setup/common.sh@33 -- # echo 1024 00:04:55.497 10:35:12 -- setup/common.sh@33 -- # return 0 00:04:55.497 10:35:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.497 10:35:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.497 10:35:12 -- setup/hugepages.sh@27 -- # local node 00:04:55.497 10:35:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.497 10:35:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.497 10:35:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.497 10:35:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.497 10:35:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.497 10:35:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.497 10:35:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.497 10:35:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.497 10:35:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.497 10:35:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.497 10:35:12 -- setup/common.sh@18 -- # local node=0 00:04:55.497 10:35:12 -- setup/common.sh@19 -- # local var val 00:04:55.497 10:35:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.497 10:35:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.497 10:35:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.497 10:35:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.497 10:35:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.497 10:35:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20546788 kB' 'MemUsed: 12283096 kB' 'SwapCached: 0 kB' 'Active: 7000132 kB' 'Inactive: 3242024 kB' 'Active(anon): 6887064 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883552 kB' 'Mapped: 42600 kB' 'AnonPages: 361676 kB' 'Shmem: 6528460 kB' 'KernelStack: 7160 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 327024 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 224908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 
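The trace at this point shows get_meminfo being re-run with node=0: mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, the leading "Node 0 " prefix is stripped from each line, and the key/value pairs are scanned until HugePages_Surp is reached. A minimal standalone sketch of that per-node lookup follows; it is illustrative only, and get_node_meminfo with its sed/awk parsing is an assumption for this sketch, not the test suite's own helper.

    #!/usr/bin/env bash
    # Illustrative sketch: read one field from a per-node meminfo file.
    # Assumes the kernel's "Node <N> <Key>: <value>" layout in
    # /sys/devices/system/node/node<N>/meminfo.
    get_node_meminfo() {
        local key=$1 node=$2
        local f=/sys/devices/system/node/node${node}/meminfo
        # Drop the "Node N " prefix, then print the value that follows "<key>:"
        sed "s/^Node ${node} //" "$f" | awk -v k="${key}:" '$1 == k {print $2}'
    }
    # Example: get_node_meminfo HugePages_Surp 0
    # On this run it would be expected to print 0, matching the trace above.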
00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.497 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.497 10:35:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.498 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.498 10:35:12 -- setup/common.sh@33 -- # echo 0 00:04:55.498 10:35:12 -- setup/common.sh@33 -- # return 0 00:04:55.498 10:35:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.498 10:35:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.498 10:35:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.498 10:35:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.498 10:35:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.498 10:35:12 -- setup/common.sh@18 -- # local node=1 00:04:55.498 10:35:12 -- setup/common.sh@19 -- # local var val 00:04:55.498 10:35:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.498 10:35:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.498 10:35:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.498 10:35:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.498 10:35:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.498 10:35:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.498 
10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.498 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23218984 kB' 'MemUsed: 4492868 kB' 'SwapCached: 0 kB' 'Active: 2307572 kB' 'Inactive: 276380 kB' 'Active(anon): 1990140 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2317592 kB' 'Mapped: 118000 kB' 'AnonPages: 266464 kB' 'Shmem: 1723780 kB' 'KernelStack: 5752 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100736 kB' 'Slab: 251852 kB' 'SReclaimable: 100736 kB' 'SUnreclaim: 151116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.499 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.499 10:35:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.500 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.500 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.500 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.500 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.500 10:35:12 -- 
setup/common.sh@32 -- # continue 00:04:55.500 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.500 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.500 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.500 10:35:12 -- setup/common.sh@32 -- # continue 00:04:55.500 10:35:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.500 10:35:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.500 10:35:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.500 10:35:12 -- setup/common.sh@33 -- # echo 0 00:04:55.500 10:35:12 -- setup/common.sh@33 -- # return 0 00:04:55.500 10:35:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.500 10:35:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.500 10:35:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.500 10:35:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:55.500 node0=512 expecting 512 00:04:55.500 10:35:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.500 10:35:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.500 10:35:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.500 10:35:12 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:55.500 node1=512 expecting 512 00:04:55.500 10:35:12 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:55.500 00:04:55.500 real 0m1.335s 00:04:55.500 user 0m0.551s 00:04:55.500 sys 0m0.745s 00:04:55.500 10:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.500 10:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:55.500 ************************************ 00:04:55.500 END TEST even_2G_alloc 00:04:55.500 ************************************ 00:04:55.500 10:35:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:55.500 10:35:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.500 10:35:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.500 10:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:55.500 ************************************ 00:04:55.500 START TEST odd_alloc 00:04:55.500 ************************************ 00:04:55.500 10:35:12 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:55.500 10:35:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:55.500 10:35:12 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:55.500 10:35:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:55.500 10:35:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.500 10:35:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.500 10:35:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.500 10:35:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:55.500 10:35:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.500 10:35:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.500 10:35:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.500 10:35:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:04:55.500 10:35:12 -- setup/hugepages.sh@83 -- # : 513 00:04:55.500 10:35:12 -- setup/hugepages.sh@84 -- # : 1 00:04:55.500 10:35:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:55.500 10:35:12 -- setup/hugepages.sh@83 -- # : 0 00:04:55.500 10:35:12 -- setup/hugepages.sh@84 -- # : 0 00:04:55.500 10:35:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.500 10:35:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:55.500 10:35:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:55.500 10:35:12 -- setup/hugepages.sh@160 -- # setup output 00:04:55.500 10:35:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.500 10:35:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.877 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.877 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.877 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.877 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.877 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.877 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.877 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.877 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.877 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.877 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.877 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.877 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.877 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.877 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.877 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.877 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.877 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.877 10:35:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:56.877 10:35:13 -- setup/hugepages.sh@89 -- # local node 00:04:56.877 10:35:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.877 10:35:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.877 10:35:13 -- setup/hugepages.sh@92 -- # local surp 00:04:56.877 10:35:13 -- setup/hugepages.sh@93 -- # local resv 00:04:56.877 10:35:13 -- setup/hugepages.sh@94 -- # local anon 00:04:56.877 10:35:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.877 10:35:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.877 10:35:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.878 10:35:13 -- setup/common.sh@18 -- # local node= 00:04:56.878 10:35:13 -- setup/common.sh@19 -- # local var val 00:04:56.878 10:35:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.878 10:35:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.878 10:35:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.878 10:35:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.878 10:35:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.878 10:35:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43753140 kB' 'MemAvailable: 47309848 kB' 'Buffers: 2704 kB' 'Cached: 12198480 kB' 'SwapCached: 0 kB' 'Active: 9304788 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874288 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625288 kB' 'Mapped: 159756 kB' 'Shmem: 8252280 kB' 'KReclaimable: 202852 kB' 'Slab: 578680 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375828 kB' 'KernelStack: 12864 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 9993604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 
10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.878 10:35:13 -- setup/common.sh@33 -- # echo 0 00:04:56.878 10:35:13 -- setup/common.sh@33 -- # return 0 00:04:56.878 10:35:13 -- setup/hugepages.sh@97 -- # anon=0 00:04:56.878 10:35:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.878 10:35:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.878 10:35:13 -- setup/common.sh@18 -- # local node= 00:04:56.878 10:35:13 -- setup/common.sh@19 -- # local var val 00:04:56.878 10:35:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.878 10:35:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.878 10:35:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.878 10:35:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.878 10:35:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.878 10:35:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43758820 kB' 'MemAvailable: 47315528 kB' 'Buffers: 2704 kB' 'Cached: 12198480 kB' 'SwapCached: 0 kB' 'Active: 9304976 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874476 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625476 kB' 'Mapped: 159764 kB' 'Shmem: 8252280 kB' 'KReclaimable: 202852 kB' 'Slab: 578664 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 
375812 kB' 'KernelStack: 12816 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 9993616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.878 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.878 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 
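The trace around this point is setup/common.sh's get_meminfo helper stepping through /proc/meminfo one "key: value" pair at a time: every key that is not the requested one (here HugePages_Surp) takes the "continue" branch, and the matching key's value is echoed and returned. A minimal bash sketch of that parsing pattern follows; the function name get_meminfo_sketch is hypothetical and the body is only an approximation of the traced logic, not the actual setup/common.sh code.

get_meminfo_sketch() {
    # Print the value of one /proc/meminfo key, skipping all other keys.
    # (The real helper also accepts a NUMA node id and strips the
    # "Node <id> " prefix from the per-node meminfo files; that part is
    # omitted in this sketch.)
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

For example, get_meminfo_sketch HugePages_Total would print 1025 on this host, and get_meminfo_sketch MemFree would print the free memory in kB (the trailing "kB" column lands in the throwaway _ variable).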
00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.879 10:35:13 -- setup/common.sh@33 -- # echo 0 00:04:56.879 10:35:13 -- setup/common.sh@33 -- # return 0 00:04:56.879 10:35:13 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.879 10:35:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.879 10:35:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.879 10:35:13 -- setup/common.sh@18 -- # local node= 00:04:56.879 10:35:13 -- setup/common.sh@19 -- # local var val 00:04:56.879 10:35:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.879 10:35:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.879 10:35:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.879 10:35:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.879 10:35:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.879 10:35:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43759112 kB' 'MemAvailable: 47315820 kB' 'Buffers: 2704 kB' 'Cached: 12198480 kB' 'SwapCached: 0 kB' 'Active: 9304276 kB' 'Inactive: 3518404 kB' 'Active(anon): 8873776 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624744 kB' 'Mapped: 159756 kB' 'Shmem: 8252280 kB' 'KReclaimable: 202852 kB' 'Slab: 578704 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375852 kB' 'KernelStack: 12848 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 9993632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.879 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.879 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 
10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 
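Once the AnonHugePages, HugePages_Surp and HugePages_Rsvd reads all return 0, the odd_alloc test checks the kernel's accounting (1025 == nr_hugepages + surplus + reserved) and that the odd total was split across the two NUMA nodes as 513 + 512, matching the nodes_test assignments earlier in the trace. A rough sketch of both steps, reusing the hypothetical get_meminfo_sketch above; this is an illustration of the idea, not the setup/hugepages.sh implementation.

odd_alloc_check_sketch() {
    local nr_hugepages=1025 no_nodes=2
    local -a nodes_test
    local node remainder=$nr_hugepages

    # Give each node an even share first, then push the leftover page onto
    # node 0, which yields the 513/512 split seen in the trace.
    for ((node = 0; node < no_nodes; node++)); do
        nodes_test[node]=$((nr_hugepages / no_nodes))
        ((remainder -= nodes_test[node]))
    done
    ((nodes_test[0] += remainder))

    # Kernel-side accounting: total pages must equal requested + surplus + reserved.
    local total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( total == nr_hugepages + surp + resv )) || return 1
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]} surplus=$surp reserved=$resv"
}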
00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.880 10:35:13 -- setup/common.sh@33 -- # echo 0 00:04:56.880 10:35:13 -- setup/common.sh@33 -- # return 0 00:04:56.880 10:35:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:56.880 10:35:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:56.880 nr_hugepages=1025 00:04:56.880 10:35:13 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:04:56.880 resv_hugepages=0 00:04:56.880 10:35:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.880 surplus_hugepages=0 00:04:56.880 10:35:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.880 anon_hugepages=0 00:04:56.880 10:35:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.880 10:35:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:56.880 10:35:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.880 10:35:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.880 10:35:13 -- setup/common.sh@18 -- # local node= 00:04:56.880 10:35:13 -- setup/common.sh@19 -- # local var val 00:04:56.880 10:35:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.880 10:35:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.880 10:35:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.880 10:35:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.880 10:35:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.880 10:35:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43758860 kB' 'MemAvailable: 47315568 kB' 'Buffers: 2704 kB' 'Cached: 12198508 kB' 'SwapCached: 0 kB' 'Active: 9304932 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874432 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625404 kB' 'Mapped: 160192 kB' 'Shmem: 8252308 kB' 'KReclaimable: 202852 kB' 'Slab: 578704 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375852 kB' 'KernelStack: 12832 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 9995000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:56.880 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.880 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- 
setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.881 10:35:13 -- setup/common.sh@33 -- # echo 1025 00:04:56.881 10:35:13 -- setup/common.sh@33 -- # return 0 00:04:56.881 10:35:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.881 10:35:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.881 10:35:13 -- setup/hugepages.sh@27 -- # local node 00:04:56.881 10:35:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.881 10:35:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.881 10:35:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.881 10:35:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:56.881 10:35:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.881 10:35:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.881 10:35:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.881 10:35:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.881 10:35:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.881 10:35:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.881 10:35:13 -- setup/common.sh@18 -- # local node=0 00:04:56.881 10:35:13 -- setup/common.sh@19 -- # local var val 00:04:56.881 10:35:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.881 10:35:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.881 
10:35:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.881 10:35:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.881 10:35:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.881 10:35:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.881 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.881 10:35:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20544976 kB' 'MemUsed: 12284908 kB' 'SwapCached: 0 kB' 'Active: 6999396 kB' 'Inactive: 3242024 kB' 'Active(anon): 6886328 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883556 kB' 'Mapped: 41984 kB' 'AnonPages: 360944 kB' 'Shmem: 6528464 kB' 'KernelStack: 7176 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 327096 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 224980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@33 -- # echo 0 00:04:56.882 10:35:13 -- setup/common.sh@33 -- # return 0 00:04:56.882 10:35:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.882 10:35:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.882 10:35:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.882 10:35:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.882 10:35:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.882 10:35:13 -- setup/common.sh@18 -- # local node=1 00:04:56.882 10:35:13 -- setup/common.sh@19 -- # local var val 00:04:56.882 10:35:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.882 10:35:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.882 10:35:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.882 10:35:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.882 10:35:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.882 10:35:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23214000 kB' 'MemUsed: 4497852 kB' 'SwapCached: 0 kB' 'Active: 2310700 kB' 'Inactive: 276380 kB' 'Active(anon): 1993268 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2317684 kB' 'Mapped: 118488 kB' 'AnonPages: 269672 kB' 'Shmem: 1723872 kB' 'KernelStack: 5672 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100736 kB' 'Slab: 251608 kB' 'SReclaimable: 100736 kB' 'SUnreclaim: 150872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 
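
A minimal sketch of the meminfo lookup pattern this trace keeps repeating (reconstructed from the xtrace above, not the actual setup/common.sh source; names are illustrative): read /proc/meminfo for the machine-wide view or /sys/devices/system/node/nodeN/meminfo when a node is given, strip the "Node N " prefix the per-node files carry, then scan for the requested key and print its value.

    # Illustrative only; mirrors what the traced get_meminfo helper appears to do.
    get_meminfo_sketch() {
        # $1 = key to look up (e.g. HugePages_Surp), $2 = optional NUMA node number
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        local -a mem
        shopt -s extglob    # the +([0-9]) pattern below needs extglob, as in the traced script
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every key with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    # On this box, get_meminfo_sketch HugePages_Total would print 1025 and
    # get_meminfo_sketch HugePages_Surp 0 would print 0, matching the values echoed above.
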
00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.882 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.882 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # continue 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.883 10:35:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.883 10:35:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.883 10:35:13 -- setup/common.sh@33 -- # echo 0 00:04:56.883 10:35:13 -- setup/common.sh@33 -- # return 0 00:04:56.883 10:35:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.883 10:35:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.883 10:35:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:56.883 node0=512 expecting 513 00:04:56.883 10:35:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.883 10:35:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.883 10:35:13 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:56.883 node1=513 expecting 512 00:04:56.883 10:35:13 -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:56.883 00:04:56.883 real 0m1.413s 00:04:56.883 user 0m0.559s 00:04:56.883 sys 0m0.811s 00:04:56.883 10:35:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.883 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:04:56.883 ************************************ 00:04:56.883 END TEST odd_alloc 00:04:56.883 ************************************ 00:04:56.883 10:35:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:56.883 10:35:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.883 10:35:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.883 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:04:56.883 ************************************ 00:04:56.883 START TEST custom_alloc 00:04:56.883 ************************************ 00:04:56.883 10:35:13 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:56.883 10:35:13 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:56.883 10:35:13 -- setup/hugepages.sh@169 -- # local node 00:04:56.883 10:35:13 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:56.883 10:35:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:56.883 10:35:13 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:56.883 10:35:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.883 10:35:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.883 10:35:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.883 10:35:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.883 10:35:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.883 10:35:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.883 10:35:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.883 10:35:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.883 10:35:13 -- setup/hugepages.sh@83 -- # : 256 00:04:56.883 10:35:13 -- setup/hugepages.sh@84 -- # : 1 00:04:56.883 10:35:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.883 10:35:13 -- setup/hugepages.sh@83 -- # : 0 00:04:56.883 10:35:13 -- setup/hugepages.sh@84 -- # : 0 00:04:56.883 10:35:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:56.883 10:35:13 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:56.883 10:35:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.883 10:35:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.883 10:35:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.883 10:35:13 -- setup/hugepages.sh@62 -- # 
user_nodes=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.883 10:35:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.883 10:35:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.883 10:35:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.883 10:35:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.883 10:35:13 -- setup/hugepages.sh@78 -- # return 0 00:04:56.883 10:35:13 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:56.883 10:35:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.883 10:35:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.883 10:35:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:56.883 10:35:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.883 10:35:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.883 10:35:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.883 10:35:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.883 10:35:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.883 10:35:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:56.883 10:35:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.883 10:35:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.883 10:35:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:56.883 10:35:13 -- setup/hugepages.sh@78 -- # return 0 00:04:56.883 10:35:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:56.883 10:35:13 -- setup/hugepages.sh@187 -- # setup output 00:04:56.883 10:35:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.883 10:35:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.260 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:58.260 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.260 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.260 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.260 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.260 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.260 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.260 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.260 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.260 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.260 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.260 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:04:58.260 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.260 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.260 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.260 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.260 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.260 10:35:15 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:58.260 10:35:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:58.260 10:35:15 -- setup/hugepages.sh@89 -- # local node 00:04:58.260 10:35:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.260 10:35:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.260 10:35:15 -- setup/hugepages.sh@92 -- # local surp 00:04:58.260 10:35:15 -- setup/hugepages.sh@93 -- # local resv 00:04:58.260 10:35:15 -- setup/hugepages.sh@94 -- # local anon 00:04:58.260 10:35:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.260 10:35:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.260 10:35:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.260 10:35:15 -- setup/common.sh@18 -- # local node= 00:04:58.260 10:35:15 -- setup/common.sh@19 -- # local var val 00:04:58.260 10:35:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.260 10:35:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.260 10:35:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.260 10:35:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.260 10:35:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.260 10:35:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.260 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.260 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.260 10:35:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 42692308 kB' 'MemAvailable: 46249016 kB' 'Buffers: 2704 kB' 'Cached: 12198572 kB' 'SwapCached: 0 kB' 'Active: 9305676 kB' 'Inactive: 3518404 kB' 'Active(anon): 8875176 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625960 kB' 'Mapped: 159764 kB' 'Shmem: 8252372 kB' 'KReclaimable: 202852 kB' 'Slab: 578540 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375688 kB' 'KernelStack: 12832 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 9993696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:58.260 10:35:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 
10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.261 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.261 10:35:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- 
setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.262 10:35:15 -- setup/common.sh@33 -- # echo 0 00:04:58.262 10:35:15 -- setup/common.sh@33 -- # return 0 00:04:58.262 10:35:15 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.262 10:35:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.262 10:35:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.262 10:35:15 -- setup/common.sh@18 -- # local node= 00:04:58.262 10:35:15 -- setup/common.sh@19 -- # local var val 00:04:58.262 10:35:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.262 10:35:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.262 10:35:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.262 10:35:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.262 10:35:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.262 10:35:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 42693608 kB' 'MemAvailable: 46250316 kB' 'Buffers: 2704 kB' 'Cached: 12198572 kB' 'SwapCached: 0 kB' 'Active: 9304872 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874372 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625216 kB' 'Mapped: 159840 kB' 'Shmem: 8252372 kB' 'KReclaimable: 202852 kB' 'Slab: 578632 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375780 kB' 'KernelStack: 12800 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 9993708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 
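
For context on the 1536 pages in the snapshot above: the custom_alloc setup earlier in this run requested a 1 GiB pool and a 2 GiB pool at the default 2048 kB hugepage size and assigned them to nodes 0 and 1, which is where HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and the global HugePages_Total of 1536 come from. The arithmetic, assuming the get_test_nr_hugepages size argument is in kB (consistent with the 512 and 1024 counts in the trace):

    # Values taken from the trace; this just spells out the page-count arithmetic.
    hugepagesize_kb=2048                              # Hugepagesize: 2048 kB
    pool0_kb=1048576                                  # get_test_nr_hugepages 1048576 (1 GiB)
    pool1_kb=2097152                                  # get_test_nr_hugepages 2097152 (2 GiB)
    nodes_hp0=$((pool0_kb / hugepagesize_kb))         # 512 pages on node 0
    nodes_hp1=$((pool1_kb / hugepagesize_kb))         # 1024 pages on node 1
    echo "HUGENODE='nodes_hp[0]=$nodes_hp0,nodes_hp[1]=$nodes_hp1'"
    echo "expected HugePages_Total: $((nodes_hp0 + nodes_hp1))"   # 1536, i.e. Hugetlb: 3145728 kB
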
10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.262 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.262 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 
10:35:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 
00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.263 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.263 10:35:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.264 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.264 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.264 10:35:15 -- setup/common.sh@33 -- # echo 0 00:04:58.264 10:35:15 -- setup/common.sh@33 -- # return 0 00:04:58.264 10:35:15 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.264 10:35:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.264 10:35:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.264 10:35:15 -- setup/common.sh@18 -- # local node= 00:04:58.264 10:35:15 -- setup/common.sh@19 -- # local var val 00:04:58.264 10:35:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.264 10:35:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.264 10:35:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.264 10:35:15 -- 
00:04:58.264 10:35:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.264 10:35:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.264 10:35:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.264 10:35:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.264 10:35:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 42694784 kB' 'MemAvailable: 46251492 kB' 'Buffers: 2704 kB' 'Cached: 12198584 kB' 'SwapCached: 0 kB' 'Active: 9304756 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874256 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625100 kB' 'Mapped: 159764 kB' 'Shmem: 8252384 kB' 'KReclaimable: 202852 kB' 'Slab: 578568 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375716 kB' 'KernelStack: 12880 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 9993720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB'
00:04:58.526 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:58.526 10:35:15 -- setup/common.sh@33 -- # echo 0
00:04:58.526 10:35:15 -- setup/common.sh@33 -- # return 0
00:04:58.526 10:35:15 -- setup/hugepages.sh@100 -- # resv=0
00:04:58.526 10:35:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:58.526 nr_hugepages=1536
00:04:58.526 10:35:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:58.526 resv_hugepages=0
00:04:58.526 10:35:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:58.526 surplus_hugepages=0
00:04:58.526 10:35:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:58.526 anon_hugepages=0
00:04:58.526 10:35:15 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:58.526 10:35:15 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
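Every get_meminfo trace above follows the same pattern: read /proc/meminfo (or a node-local copy) line by line with IFS=': ', skip each key that does not match the requested one, and print the value of the match, defaulting to 0. A minimal sketch of that lookup pattern, assuming a hypothetical helper name get_meminfo_value as a stand-in for the real get_meminfo in setup/common.sh:

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern traced above; get_meminfo_value is a
# hypothetical name, not the helper's real implementation.
get_meminfo_value() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	# Per-node queries read the node-local counters instead of the global file.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local line var val
	while IFS= read -r line; do
		line=${line#"Node $node "}   # node files prefix each line with "Node <N> "
		var=${line%%:*}
		if [[ $var == "$get" ]]; then
			val=${line#*:}
			val=${val//[^0-9]/}   # strip whitespace and the trailing "kB" unit
			echo "${val:-0}"
			return 0
		fi
	done <"$mem_f"
	echo 0   # key not present
}

# Example: system-wide and node-0 surplus hugepages, as queried in this log.
get_meminfo_value HugePages_Surp
get_meminfo_value HugePages_Surp 0

HugePages_Surp and HugePages_Rsvd sit near the end of /proc/meminfo, which is why each lookup above walks almost the whole file before returning its value.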
00:04:58.526 10:35:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:58.526 10:35:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:58.526 10:35:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.526 10:35:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.526 10:35:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 42694784 kB' 'MemAvailable: 46251492 kB' 'Buffers: 2704 kB' 'Cached: 12198600 kB' 'SwapCached: 0 kB' 'Active: 9304780 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874280 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625100 kB' 'Mapped: 159764 kB' 'Shmem: 8252400 kB' 'KReclaimable: 202852 kB' 'Slab: 578568 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375716 kB' 'KernelStack: 12880 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 9993736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB'
00:04:58.528 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:58.528 10:35:15 -- setup/common.sh@33 -- # echo 1536
00:04:58.528 10:35:15 -- setup/common.sh@33 -- # return 0
00:04:58.528 10:35:15 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:58.528 10:35:15 -- setup/hugepages.sh@112 -- # get_nodes
00:04:58.528 10:35:15 -- setup/hugepages.sh@27 -- # local node
00:04:58.528 10:35:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.528 10:35:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:58.528 10:35:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:58.528 10:35:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:58.528 10:35:15 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:58.528 10:35:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
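With the system-wide numbers confirmed (nr_hugepages=1536, no reserved, surplus or anonymous hugepages), the script turns to the per-node split it configured: nodes_sys records 512 pages for node 0 and 1024 for node 1, 1536 in total. A rough cross-check of that split, with illustrative names and the expected counts taken from this log:

#!/usr/bin/env bash
# Cross-check sketch: per-node hugepage totals should add up to the
# system-wide count (512 + 1024 = 1536 in this run). "expected" is an
# illustrative name, not a variable from the test scripts.
expected=(512 1024)
total=0
for node in 0 1; do
	f=/sys/devices/system/node/node$node/meminfo
	[[ -e $f ]] || continue
	got=$(awk '/HugePages_Total/ {print $NF}' "$f")
	echo "node$node: expected ${expected[node]}, kernel reports ${got:-?}"
	total=$((total + expected[node]))
done
echo "expected system-wide total: $total pages"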
00:04:58.528 10:35:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:58.528 10:35:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:58.528 10:35:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:58.528 10:35:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:58.528 10:35:15 -- setup/common.sh@18 -- # local node=0
00:04:58.528 10:35:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:58.528 10:35:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.528 10:35:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20524048 kB' 'MemUsed: 12305836 kB' 'SwapCached: 0 kB' 'Active: 7000008 kB' 'Inactive: 3242024 kB' 'Active(anon): 6886940 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883560 kB' 'Mapped: 41980 kB' 'AnonPages: 361544 kB' 'Shmem: 6528468 kB' 'KernelStack: 7192 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 326960 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 224844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:58.529 10:35:15 -- setup/common.sh@33 -- # echo 0
00:04:58.529 10:35:15 -- setup/common.sh@33 -- # return 0
00:04:58.529 10:35:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
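The (( nodes_test[node] += ... )) traces fold the global reserved count and each node's measured surplus into the per-node expectation before the final comparison. The same accumulation pattern, reduced to a few lines with hypothetical names (nodes_test seeded from the 512/1024 split above, surplus_of standing in for the per-node HugePages_Surp lookup):

#!/usr/bin/env bash
# Accumulation pattern sketch: start from the expected per-node page counts
# and add each node's measured surplus (0 on both nodes in this run).
# nodes_test and surplus_of are hypothetical names used for illustration.
nodes_test=(512 1024)      # expected pages per node, from the setup above
surplus_of() { echo 0; }   # stand-in for the per-node HugePages_Surp lookup
for node in 0 1; do
	((nodes_test[node] += $(surplus_of "$node")))
done
printf 'node%s -> %s pages expected\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"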
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.529 10:35:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:58.529 10:35:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:58.529 10:35:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.529 10:35:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 22170736 kB' 'MemUsed: 5541116 kB' 'SwapCached: 0 kB' 'Active: 2304776 kB' 'Inactive: 276380 kB' 'Active(anon): 1987344 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 276380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2317760 kB' 'Mapped: 117784 kB' 'AnonPages: 263556 kB' 'Shmem: 1723948 kB' 'KernelStack: 5688 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100736 kB' 'Slab: 251608 kB' 'SReclaimable: 100736 kB' 'SUnreclaim: 150872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 
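The xtrace entries around this point are the per-node branch of the meminfo helper. A minimal bash sketch of what it is doing, paraphrased from the trace rather than copied from setup/common.sh (the function name and exact structure below are illustrative): when a node number is supplied it reads /sys/devices/system/node/node<N>/meminfo instead of /proc/meminfo, strips the "Node <N> " prefix from every line, then scans for the requested key with the same IFS=': ' read loop seen above.

  get_node_meminfo() {
      local key=$1 node=$2
      local mem_f=/proc/meminfo
      # Per-node counters live under sysfs; fall back to /proc/meminfo when no node is given.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      shopt -s extglob
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix each line with "Node <N> "; strip it so the keys line up.
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          # Print the value and stop once the requested field is found.
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  # Example: get_node_meminfo HugePages_Surp 1   -> 0 for the node1 snapshot above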
00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- 
setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.529 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.529 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 
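The entries that follow close out the custom_alloc case: surplus and reserved pages are both 0 on each node, so the per-node totals come straight from HugePages_Total, and the test joins them into a comma-separated string and compares it to the requested split (the "[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]" entry below). A compact sketch of that final comparison, with illustrative variable names:

  observed=(512 1024)          # node0 / node1 HugePages_Total from the per-node snapshots
  expected="512,1024"          # split requested by the custom_alloc test
  got="${observed[0]},${observed[1]}"
  if [[ $got == "$expected" ]]; then
      echo "custom_alloc: per-node split verified ($got)"
  else
      echo "custom_alloc: mismatch, got $got, expected $expected" >&2
  fi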
00:04:58.530 10:35:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # continue 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.530 10:35:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.530 10:35:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.530 10:35:15 -- setup/common.sh@33 -- # echo 0 00:04:58.530 10:35:15 -- setup/common.sh@33 -- # return 0 00:04:58.530 10:35:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.530 10:35:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.530 10:35:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.530 10:35:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.530 10:35:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:58.530 node0=512 expecting 512 00:04:58.530 10:35:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.530 10:35:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.530 10:35:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.530 10:35:15 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:58.530 node1=1024 expecting 1024 00:04:58.530 10:35:15 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:58.530 00:04:58.530 real 0m1.489s 00:04:58.530 user 0m0.665s 00:04:58.530 sys 0m0.793s 00:04:58.530 10:35:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.530 10:35:15 -- common/autotest_common.sh@10 -- # set +x 00:04:58.530 ************************************ 00:04:58.530 END TEST custom_alloc 00:04:58.530 ************************************ 00:04:58.530 10:35:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:58.530 10:35:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.530 10:35:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.530 10:35:15 -- common/autotest_common.sh@10 -- # set +x 00:04:58.530 ************************************ 00:04:58.530 START TEST no_shrink_alloc 00:04:58.530 ************************************ 00:04:58.530 10:35:15 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:58.530 10:35:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:58.530 10:35:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.530 10:35:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.530 10:35:15 -- setup/hugepages.sh@51 -- # shift 00:04:58.530 10:35:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:58.530 10:35:15 -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.530 10:35:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.530 10:35:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.530 10:35:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.530 10:35:15 -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:58.530 10:35:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.530 10:35:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.530 10:35:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.530 10:35:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.530 10:35:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.530 10:35:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:58.530 10:35:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.530 10:35:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:58.530 10:35:15 -- setup/hugepages.sh@73 -- # return 0 00:04:58.530 10:35:15 -- setup/hugepages.sh@198 -- # setup output 00:04:58.530 10:35:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.530 10:35:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.908 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.908 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.908 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.908 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.908 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.908 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.908 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.908 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.908 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.908 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.908 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.908 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.908 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.908 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.908 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.908 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.908 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.908 10:35:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:59.908 10:35:16 -- setup/hugepages.sh@89 -- # local node 00:04:59.908 10:35:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.908 10:35:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.908 10:35:16 -- setup/hugepages.sh@92 -- # local surp 00:04:59.908 10:35:16 -- setup/hugepages.sh@93 -- # local resv 00:04:59.908 10:35:16 -- setup/hugepages.sh@94 -- # local anon 00:04:59.908 10:35:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.908 10:35:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.908 10:35:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.908 10:35:16 -- setup/common.sh@18 -- # local node= 00:04:59.908 10:35:16 -- setup/common.sh@19 -- # local var val 00:04:59.908 10:35:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.908 10:35:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.908 10:35:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.908 10:35:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.908 10:35:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.908 10:35:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43739512 kB' 'MemAvailable: 47296220 kB' 'Buffers: 2704 kB' 'Cached: 12198668 kB' 'SwapCached: 0 kB' 'Active: 9305436 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874936 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625616 kB' 'Mapped: 159788 kB' 'Shmem: 8252468 kB' 'KReclaimable: 202852 kB' 'Slab: 578612 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375760 kB' 'KernelStack: 12896 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9993920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196288 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.908 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.908 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 
10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
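For the no_shrink_alloc setup traced above, "get_test_nr_hugepages 2097152 0" converts a pool size in kB into a page count using the default 2048 kB hugepage size reported in the meminfo snapshots, and pins the whole pool to node 0. A short sketch of that arithmetic (names are illustrative, not lifted from setup/hugepages.sh):

  size_kb=2097152                 # requested pool size in kB
  hugepage_kb=2048                # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))    # 2097152 / 2048 = 1024 pages
  node_ids=(0)                    # the trailing "0" restricts the pool to node 0
  declare -a nodes_test
  for node in "${node_ids[@]}"; do
      nodes_test[node]=$nr_hugepages           # node 0 gets all 1024 pages
  done
  echo "nr_hugepages=$nr_hugepages on node(s) ${node_ids[*]}"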
00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.909 10:35:16 -- setup/common.sh@33 -- # echo 0 00:04:59.909 10:35:16 -- setup/common.sh@33 -- # return 0 00:04:59.909 10:35:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:59.909 10:35:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.909 10:35:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.909 10:35:16 -- setup/common.sh@18 -- # local node= 00:04:59.909 10:35:16 -- setup/common.sh@19 -- # local var val 00:04:59.909 10:35:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.909 10:35:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.909 10:35:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.909 10:35:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.909 10:35:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.909 10:35:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43740384 kB' 'MemAvailable: 47297092 kB' 'Buffers: 2704 kB' 'Cached: 12198668 kB' 'SwapCached: 0 kB' 'Active: 9305756 kB' 'Inactive: 3518404 kB' 'Active(anon): 8875256 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625960 kB' 'Mapped: 159848 kB' 'Shmem: 8252468 kB' 'KReclaimable: 
202852 kB' 'Slab: 578676 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375824 kB' 'KernelStack: 12928 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9993932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.909 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.909 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
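As a cross-check on the snapshot printed above, the hugetlb accounting is internally consistent: HugePages_Total of 1024 at a Hugepagesize of 2048 kB gives exactly the 2097152 kB reported as Hugetlb. A small awk one-liner (illustrative, not part of the test scripts) that verifies this on a live system:

  awk '/^HugePages_Total:/ {total=$2}
       /^Hugepagesize:/    {size=$2}
       /^Hugetlb:/         {hugetlb=$2}
       END { printf "Hugetlb %d kB vs %d*%d kB -> %s\n", hugetlb, total, size,
             (hugetlb == total * size ? "OK" : "MISMATCH") }' /proc/meminfo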
00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.910 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.910 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.911 10:35:16 -- setup/common.sh@33 -- # echo 0 00:04:59.911 10:35:16 -- setup/common.sh@33 -- # return 0 00:04:59.911 10:35:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:59.911 10:35:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.911 10:35:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.911 10:35:16 -- setup/common.sh@18 -- # local node= 00:04:59.911 10:35:16 -- setup/common.sh@19 -- # local var val 00:04:59.911 10:35:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.911 10:35:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.911 10:35:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.911 10:35:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.911 10:35:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.911 10:35:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43741792 kB' 'MemAvailable: 47298500 kB' 'Buffers: 2704 kB' 'Cached: 12198680 kB' 'SwapCached: 0 kB' 'Active: 9304804 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874304 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624940 kB' 'Mapped: 159768 kB' 'Shmem: 8252480 kB' 'KReclaimable: 202852 kB' 'Slab: 578652 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375800 kB' 'KernelStack: 12896 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9993948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 
-- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 
-- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.911 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.911 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.912 10:35:16 -- setup/common.sh@33 -- # echo 0 00:04:59.912 10:35:16 -- setup/common.sh@33 -- # return 0 00:04:59.912 10:35:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:59.912 10:35:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.912 nr_hugepages=1024 
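The trace above is the common.sh meminfo scan: it splits each line of /proc/meminfo on ': ', skips every key that is not the one requested (here HugePages_Rsvd), echoes the matching value (0), and hugepages.sh then stores it as resv before echoing nr_hugepages=1024. A minimal stand-alone sketch of that lookup pattern follows; the function name meminfo_value is hypothetical and this is an illustration of the pattern, not the script's own helper.

meminfo_value() {
    # Print the value of one /proc/meminfo key, e.g. "HugePages_Rsvd" -> "0".
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, SwapTotal, Dirty, ... until the key matches
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
# Usage matching the values logged above:
#   resv=$(meminfo_value HugePages_Rsvd)   # 0
#   nr=$(meminfo_value HugePages_Total)    # 1024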
00:04:59.912 10:35:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.912 resv_hugepages=0 00:04:59.912 10:35:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.912 surplus_hugepages=0 00:04:59.912 10:35:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.912 anon_hugepages=0 00:04:59.912 10:35:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.912 10:35:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.912 10:35:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.912 10:35:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.912 10:35:16 -- setup/common.sh@18 -- # local node= 00:04:59.912 10:35:16 -- setup/common.sh@19 -- # local var val 00:04:59.912 10:35:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.912 10:35:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.912 10:35:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.912 10:35:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.912 10:35:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.912 10:35:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43741944 kB' 'MemAvailable: 47298652 kB' 'Buffers: 2704 kB' 'Cached: 12198696 kB' 'SwapCached: 0 kB' 'Active: 9305160 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874660 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625292 kB' 'Mapped: 159768 kB' 'Shmem: 8252496 kB' 'KReclaimable: 202852 kB' 'Slab: 578652 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375800 kB' 'KernelStack: 12896 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9993960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.912 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.912 10:35:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.913 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.913 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.914 10:35:16 -- setup/common.sh@33 -- # echo 1024 00:04:59.914 10:35:16 -- setup/common.sh@33 -- # return 0 00:04:59.914 10:35:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.914 10:35:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.914 10:35:16 -- setup/hugepages.sh@27 -- # local node 00:04:59.914 10:35:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.914 10:35:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.914 10:35:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.914 10:35:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.914 10:35:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.914 10:35:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.914 10:35:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.914 10:35:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.914 10:35:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.914 10:35:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.914 10:35:16 -- setup/common.sh@18 -- # local node=0 00:04:59.914 10:35:16 -- setup/common.sh@19 -- # local var val 00:04:59.914 10:35:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.914 10:35:16 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.914 10:35:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.914 10:35:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.914 10:35:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.914 10:35:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19469460 kB' 'MemUsed: 13360424 kB' 'SwapCached: 0 kB' 'Active: 6999896 kB' 'Inactive: 3242024 kB' 'Active(anon): 6886828 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883568 kB' 'Mapped: 41980 kB' 'AnonPages: 361428 kB' 'Shmem: 6528476 kB' 'KernelStack: 7176 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 327104 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 224988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 
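Immediately above, the same scan runs against a single node: common.sh@23-24 switches mem_f to /sys/devices/system/node/node0/meminfo, @29 strips the "Node <N> " prefix that per-node meminfo lines carry, and the loop then looks for HugePages_Surp. A hedged sketch of that per-node variant; the name node_meminfo_value is hypothetical, and extglob is enabled for the prefix strip, as the traced expansion implies.

shopt -s extglob                             # needed for the "+([0-9])" prefix pattern below
node_meminfo_value() {
    # Print one meminfo key, either globally or for a single NUMA node.
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # per-node files prefix every line with "Node <N> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# Usage matching the log: node_meminfo_value HugePages_Surp 0   -> 0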
00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- 
setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.914 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.914 10:35:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.914 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.915 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.915 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.915 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.915 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.915 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.915 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.915 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.915 10:35:16 -- setup/common.sh@32 -- # continue 00:04:59.915 10:35:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.915 10:35:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.915 10:35:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.915 10:35:16 -- setup/common.sh@33 -- # echo 0 00:04:59.915 10:35:16 -- setup/common.sh@33 -- # return 0 00:04:59.915 10:35:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.915 10:35:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.915 10:35:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.915 10:35:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.915 10:35:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.915 node0=1024 expecting 1024 00:04:59.915 10:35:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.915 10:35:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:59.915 10:35:16 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:59.915 10:35:16 -- setup/hugepages.sh@202 -- # setup output 00:04:59.915 10:35:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.915 10:35:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.362 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:01.362 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:01.362 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:01.362 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:01.363 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:01.363 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:01.363 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:01.363 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:01.363 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:01.363 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:01.363 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:01.363 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:01.363 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:01.363 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:01.363 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:01.363 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:01.363 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:01.363 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:01.363 10:35:17 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:01.363 10:35:17 -- setup/hugepages.sh@89 -- # local node 00:05:01.363 10:35:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.363 10:35:17 -- setup/hugepages.sh@91 -- 
# local sorted_s 00:05:01.363 10:35:17 -- setup/hugepages.sh@92 -- # local surp 00:05:01.363 10:35:17 -- setup/hugepages.sh@93 -- # local resv 00:05:01.363 10:35:17 -- setup/hugepages.sh@94 -- # local anon 00:05:01.363 10:35:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.363 10:35:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.363 10:35:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.363 10:35:17 -- setup/common.sh@18 -- # local node= 00:05:01.363 10:35:17 -- setup/common.sh@19 -- # local var val 00:05:01.363 10:35:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.363 10:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.363 10:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.363 10:35:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.363 10:35:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.363 10:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43743520 kB' 'MemAvailable: 47300228 kB' 'Buffers: 2704 kB' 'Cached: 12198744 kB' 'SwapCached: 0 kB' 'Active: 9305344 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874844 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625488 kB' 'Mapped: 159784 kB' 'Shmem: 8252544 kB' 'KReclaimable: 202852 kB' 'Slab: 578524 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375672 kB' 'KernelStack: 12880 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9994256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196288 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 
10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 
10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.363 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.363 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.364 10:35:17 -- setup/common.sh@33 -- # echo 0 00:05:01.364 10:35:17 -- setup/common.sh@33 -- # return 0 00:05:01.364 10:35:17 -- setup/hugepages.sh@97 -- # anon=0 00:05:01.364 10:35:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.364 10:35:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.364 10:35:17 -- setup/common.sh@18 -- # local node= 00:05:01.364 10:35:17 -- setup/common.sh@19 -- # local var val 00:05:01.364 10:35:17 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:01.364 10:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.364 10:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.364 10:35:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.364 10:35:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.364 10:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43752624 kB' 'MemAvailable: 47309332 kB' 'Buffers: 2704 kB' 'Cached: 12198744 kB' 'SwapCached: 0 kB' 'Active: 9305660 kB' 'Inactive: 3518404 kB' 'Active(anon): 8875160 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625796 kB' 'Mapped: 159784 kB' 'Shmem: 8252544 kB' 'KReclaimable: 202852 kB' 'Slab: 578476 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375624 kB' 'KernelStack: 12848 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9994268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 
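This second verify pass (setup/hugepages.sh@89-100 in the trace) repeats the bookkeeping after setup.sh was re-run with NRHUGE=512 while 1024 pages were already allocated: anonymous hugepages only count when transparent hugepages are not "[never]" (the state is "always [madvise] never" here and AnonHugePages is 0 kB), then surplus and reserved pages are read the same way as before. A hedged recap of that arithmetic, reusing the meminfo_value sketch above; the variable names are illustrative and the figures are the ones logged (1024 total, 0 for everything else).

nr_hugepages=1024                                            # target from the earlier allocation
anon=0
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)       # "always [madvise] never" in this run
if [[ $thp != *'[never]'* ]]; then
    anon=$(meminfo_value AnonHugePages)                      # 0 here, so it does not skew the check
fi
surp=$(meminfo_value HugePages_Surp)                         # 0
resv=$(meminfo_value HugePages_Rsvd)                         # 0
total=$(meminfo_value HugePages_Total)                       # 1024
(( total == nr_hugepages + surp + resv )) \
    && echo "hugepage accounting consistent (${total} pages)"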
00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.364 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.364 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.365 10:35:17 -- setup/common.sh@33 -- # echo 0 00:05:01.365 10:35:17 -- setup/common.sh@33 -- # return 0 00:05:01.365 10:35:17 -- setup/hugepages.sh@99 -- # surp=0 00:05:01.365 10:35:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.365 10:35:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.365 10:35:17 -- setup/common.sh@18 -- # local node= 00:05:01.365 10:35:17 -- setup/common.sh@19 -- # local var val 00:05:01.365 10:35:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.365 10:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.365 10:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.365 10:35:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.365 10:35:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.365 10:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43752800 kB' 'MemAvailable: 47309508 kB' 'Buffers: 2704 kB' 'Cached: 12198744 kB' 'SwapCached: 0 kB' 'Active: 9305608 kB' 'Inactive: 3518404 kB' 'Active(anon): 8875108 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625732 kB' 'Mapped: 159776 kB' 'Shmem: 8252544 kB' 'KReclaimable: 202852 kB' 'Slab: 578556 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375704 kB' 'KernelStack: 12896 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9994284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.365 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.365 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 
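The long runs of entries above of the form "[[ FieldName == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" are the xtrace of setup/common.sh's get_meminfo helper scanning every meminfo field until it reaches the one that was requested, then echoing its value. A minimal sketch of that lookup, reconstructed from the trace (the exact function body, option handling and variable names are assumptions, not the actual SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    # Assumed reconstruction of the meminfo lookup traced above.
    get_meminfo() {
        local get=$1 node=${2:-}    # field name, optional NUMA node
        local var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node meminfo files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # the "continue" entries in the trace
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example calls matching the values echoed later in this log:
    #   get_meminfo HugePages_Total     -> 1024
    #   get_meminfo HugePages_Surp 0    -> 0 (node0)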
00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:17 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:18 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.366 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.366 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.366 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.366 10:35:18 -- setup/common.sh@33 -- # echo 0 00:05:01.366 10:35:18 -- setup/common.sh@33 -- # return 0 00:05:01.366 10:35:18 -- setup/hugepages.sh@100 -- # resv=0 00:05:01.366 10:35:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.366 nr_hugepages=1024 00:05:01.366 10:35:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.366 resv_hugepages=0 00:05:01.366 10:35:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.366 surplus_hugepages=0 00:05:01.366 10:35:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.366 anon_hugepages=0 00:05:01.367 10:35:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.367 10:35:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.367 10:35:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.367 10:35:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.367 10:35:18 -- setup/common.sh@18 -- # local node= 00:05:01.367 10:35:18 -- setup/common.sh@19 -- # local var val 00:05:01.367 10:35:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.367 10:35:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.367 10:35:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.367 10:35:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.367 10:35:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.367 10:35:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43753064 kB' 'MemAvailable: 47309772 kB' 'Buffers: 2704 kB' 'Cached: 12198772 kB' 'SwapCached: 0 kB' 'Active: 9305296 kB' 'Inactive: 3518404 kB' 'Active(anon): 8874796 kB' 'Inactive(anon): 0 kB' 'Active(file): 430500 kB' 'Inactive(file): 3518404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625380 kB' 'Mapped: 159776 kB' 'Shmem: 8252572 kB' 'KReclaimable: 202852 kB' 'Slab: 578556 kB' 'SReclaimable: 202852 kB' 'SUnreclaim: 375704 kB' 'KernelStack: 12912 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 9994296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1881692 kB' 'DirectMap2M: 17960960 kB' 'DirectMap1G: 49283072 kB' 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 
10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 
10:35:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.367 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.367 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 
00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.368 10:35:18 -- setup/common.sh@33 -- # echo 1024 00:05:01.368 10:35:18 -- setup/common.sh@33 -- # return 0 00:05:01.368 10:35:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:05:01.368 10:35:18 -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.368 10:35:18 -- setup/hugepages.sh@27 -- # local node 00:05:01.368 10:35:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.368 10:35:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.368 10:35:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.368 10:35:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:01.368 10:35:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.368 10:35:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.368 10:35:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.368 10:35:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.368 10:35:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.368 10:35:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.368 10:35:18 -- setup/common.sh@18 -- # local node=0 00:05:01.368 10:35:18 -- setup/common.sh@19 -- # local var val 00:05:01.368 10:35:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.368 10:35:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.368 10:35:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.368 10:35:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.368 10:35:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.368 10:35:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19491780 kB' 'MemUsed: 13338104 kB' 'SwapCached: 0 kB' 'Active: 7000696 kB' 'Inactive: 3242024 kB' 'Active(anon): 6887628 kB' 'Inactive(anon): 0 kB' 'Active(file): 113068 kB' 'Inactive(file): 3242024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9883576 kB' 'Mapped: 41980 kB' 'AnonPages: 362236 kB' 'Shmem: 6528484 kB' 'KernelStack: 7240 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102116 kB' 'Slab: 326976 kB' 'SReclaimable: 102116 kB' 'SUnreclaim: 224860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 
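Once those scans return, the hugepages.sh side of the trace records surp=0 and resv=0, echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then checks (at hugepages.sh@107, @109 and @110) that the pool the kernel reports actually adds up before distributing the expectation across NUMA nodes. A hedged sketch of that accounting, with the field names taken from the trace and everything else assumed:

    #!/usr/bin/env bash
    # Assumed sketch of the system-wide hugepages consistency check.
    # get_meminfo is the lookup sketched earlier in this log.
    verify_hugepages() {
        local nr_hugepages=$1    # what the test asked for; 1024 in this run
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)      # 0 here
        resv=$(get_meminfo HugePages_Rsvd)      # 0 here
        total=$(get_meminfo HugePages_Total)    # 1024 here
        # The reported pool must equal request + surplus + reserved,
        # and with nothing surplus or reserved it must match the request exactly.
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages )) || return 1
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    }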
00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.368 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.368 10:35:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # continue 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.369 10:35:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.369 10:35:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.369 10:35:18 -- setup/common.sh@33 -- # echo 0 00:05:01.369 10:35:18 -- setup/common.sh@33 -- # return 0 00:05:01.369 10:35:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.369 10:35:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.369 10:35:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.369 10:35:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.369 10:35:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.369 node0=1024 expecting 1024 00:05:01.369 10:35:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.369 00:05:01.369 real 0m2.871s 00:05:01.369 user 0m1.150s 00:05:01.369 sys 0m1.641s 00:05:01.369 10:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.369 10:35:18 -- common/autotest_common.sh@10 -- # set +x 00:05:01.369 ************************************ 00:05:01.369 END TEST no_shrink_alloc 00:05:01.369 ************************************ 00:05:01.369 10:35:18 -- setup/hugepages.sh@217 -- # clear_hp 00:05:01.369 10:35:18 -- setup/hugepages.sh@37 -- # local node hp 00:05:01.369 10:35:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:01.369 10:35:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.369 10:35:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.369 
10:35:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.369 10:35:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.369 10:35:18 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:01.369 10:35:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.369 10:35:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.369 10:35:18 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.369 10:35:18 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.369 10:35:18 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:01.369 10:35:18 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:01.369 00:05:01.369 real 0m11.143s 00:05:01.369 user 0m4.250s 00:05:01.369 sys 0m5.819s 00:05:01.369 10:35:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.369 10:35:18 -- common/autotest_common.sh@10 -- # set +x 00:05:01.369 ************************************ 00:05:01.369 END TEST hugepages 00:05:01.369 ************************************ 00:05:01.369 10:35:18 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:01.369 10:35:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.369 10:35:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.369 10:35:18 -- common/autotest_common.sh@10 -- # set +x 00:05:01.369 ************************************ 00:05:01.369 START TEST driver 00:05:01.369 ************************************ 00:05:01.369 10:35:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:01.369 * Looking for test storage... 
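The clear_hp teardown just traced walks every hugepages directory under /sys/devices/system/node/node*/ and echoes 0 for each page size on each node, then exports CLEAR_HUGE=yes before the hugepages suite reports its timing. Because set -x does not print redirection targets, the nr_hugepages filename in the sketch below is an assumption; the loop structure follows the trace:

    #!/usr/bin/env bash
    # Assumed sketch of the clear_hp teardown; the redirect into
    # nr_hugepages is inferred and not visible in the xtrace above.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"    # release this node's pages of this size
            done
        done
        export CLEAR_HUGE=yes    # later setup.sh invocations re-clear as needed
    }

The driver suite that starts here ends up selecting vfio-pci; as the trace that follows shows, pick_driver checks /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, counts the entries under /sys/kernel/iommu_groups (141 on this host), and confirms that modprobe --show-depends can resolve vfio_pci before echoing the driver name. A rough sketch of that decision, reconstructed from the trace under those assumptions:

    #!/usr/bin/env bash
    shopt -s nullglob    # so an empty iommu_groups glob counts as zero
    # Assumed sketch of the vfio-pci selection traced below.
    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local -a iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is only usable with a working IOMMU or unsafe no-IOMMU mode.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [yY] ]]; then
            # Confirm the module and its dependencies exist on this kernel.
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }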
00:05:01.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:01.369 10:35:18 -- setup/driver.sh@68 -- # setup reset 00:05:01.369 10:35:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.369 10:35:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.901 10:35:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:03.901 10:35:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.901 10:35:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.901 10:35:20 -- common/autotest_common.sh@10 -- # set +x 00:05:03.901 ************************************ 00:05:03.901 START TEST guess_driver 00:05:03.901 ************************************ 00:05:03.901 10:35:20 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:03.901 10:35:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:03.901 10:35:20 -- setup/driver.sh@47 -- # local fail=0 00:05:03.901 10:35:20 -- setup/driver.sh@49 -- # pick_driver 00:05:03.901 10:35:20 -- setup/driver.sh@36 -- # vfio 00:05:03.901 10:35:20 -- setup/driver.sh@21 -- # local iommu_grups 00:05:03.901 10:35:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:03.901 10:35:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:03.901 10:35:20 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:03.901 10:35:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:03.901 10:35:20 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:03.901 10:35:20 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:03.901 10:35:20 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:03.901 10:35:20 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:03.901 10:35:20 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:03.901 10:35:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:03.901 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:03.901 10:35:20 -- setup/driver.sh@30 -- # return 0 00:05:03.901 10:35:20 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:03.901 10:35:20 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:03.901 10:35:20 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:03.901 10:35:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:03.901 Looking for driver=vfio-pci 00:05:03.901 10:35:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.901 10:35:20 -- setup/driver.sh@45 -- # setup output config 00:05:03.901 10:35:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.901 10:35:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.276 10:35:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.276 10:35:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.276 10:35:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.214 10:35:22 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:05:06.214 10:35:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.214 10:35:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.470 10:35:23 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:06.470 10:35:23 -- setup/driver.sh@65 -- # setup reset 00:05:06.470 10:35:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.470 10:35:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.998 00:05:08.998 real 0m4.816s 00:05:08.998 user 0m1.087s 00:05:08.998 sys 0m1.864s 00:05:08.998 10:35:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.998 10:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:08.998 ************************************ 00:05:08.998 END TEST guess_driver 00:05:08.998 ************************************ 00:05:08.998 00:05:08.998 real 0m7.401s 00:05:08.998 user 0m1.653s 00:05:08.998 sys 0m2.879s 00:05:08.998 10:35:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.998 10:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:08.998 ************************************ 00:05:08.998 END TEST driver 00:05:08.998 ************************************ 00:05:08.998 10:35:25 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:08.998 10:35:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.998 10:35:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.998 10:35:25 -- common/autotest_common.sh@10 -- # set +x 00:05:08.998 ************************************ 00:05:08.998 START TEST devices 00:05:08.998 ************************************ 00:05:08.998 10:35:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:08.998 * Looking for test storage... 
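For context, the guess_driver trace above reduces to one small decision: the node has 141 populated IOMMU groups and modprobe can resolve vfio_pci together with its dependencies, so the test settles on vfio-pci rather than the "No valid driver found" sentinel. A minimal bash sketch of that decision (an illustrative reconstruction, not the traced driver.sh itself):

iommu_groups=(/sys/kernel/iommu_groups/*)          # 141 groups on this node
if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
    driver=vfio-pci                                 # the value echoed at driver.sh@37
else
    driver='No valid driver found'                  # the sentinel tested at driver.sh@51
fi
echo "Looking for driver=$driver"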
00:05:08.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:08.998 10:35:25 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:08.998 10:35:25 -- setup/devices.sh@192 -- # setup reset 00:05:08.998 10:35:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.998 10:35:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.370 10:35:27 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:10.370 10:35:27 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:10.370 10:35:27 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:10.370 10:35:27 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:10.370 10:35:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:10.370 10:35:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:10.370 10:35:27 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:10.370 10:35:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.370 10:35:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:10.370 10:35:27 -- setup/devices.sh@196 -- # blocks=() 00:05:10.370 10:35:27 -- setup/devices.sh@196 -- # declare -a blocks 00:05:10.370 10:35:27 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:10.370 10:35:27 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:10.370 10:35:27 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:10.370 10:35:27 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.370 10:35:27 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:10.370 10:35:27 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:10.370 10:35:27 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:10.370 10:35:27 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:10.370 10:35:27 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:10.370 10:35:27 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:10.370 10:35:27 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:10.370 No valid GPT data, bailing 00:05:10.370 10:35:27 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.370 10:35:27 -- scripts/common.sh@393 -- # pt= 00:05:10.370 10:35:27 -- scripts/common.sh@394 -- # return 1 00:05:10.370 10:35:27 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:10.370 10:35:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:10.370 10:35:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:10.370 10:35:27 -- setup/common.sh@80 -- # echo 1000204886016 00:05:10.370 10:35:27 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:10.370 10:35:27 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.370 10:35:27 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:10.370 10:35:27 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:10.370 10:35:27 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:10.370 10:35:27 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:10.370 10:35:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.370 10:35:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.370 10:35:27 -- common/autotest_common.sh@10 -- # set +x 00:05:10.370 ************************************ 00:05:10.370 START TEST nvme_mount 00:05:10.370 ************************************ 00:05:10.370 10:35:27 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:05:10.370 10:35:27 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:10.370 10:35:27 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:10.370 10:35:27 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.370 10:35:27 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.370 10:35:27 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:10.370 10:35:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.370 10:35:27 -- setup/common.sh@40 -- # local part_no=1 00:05:10.370 10:35:27 -- setup/common.sh@41 -- # local size=1073741824 00:05:10.370 10:35:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.370 10:35:27 -- setup/common.sh@44 -- # parts=() 00:05:10.370 10:35:27 -- setup/common.sh@44 -- # local parts 00:05:10.370 10:35:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.370 10:35:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.370 10:35:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.370 10:35:27 -- setup/common.sh@46 -- # (( part++ )) 00:05:10.370 10:35:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.370 10:35:27 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:10.370 10:35:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:10.370 10:35:27 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:11.304 Creating new GPT entries in memory. 00:05:11.304 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:11.304 other utilities. 00:05:11.304 10:35:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:11.304 10:35:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.304 10:35:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.304 10:35:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.304 10:35:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:12.678 Creating new GPT entries in memory. 00:05:12.678 The operation has completed successfully. 
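The partition_drive sequence traced here is the reusable pattern for all of the mount tests: wipe the GPT, start sync_dev_uevents.sh in the background to wait for the new partition's uevent, then create a 1 GiB partition under flock so nothing else touches the disk concurrently. In plain commands (illustrative; the sizes come from the trace, where size=1073741824 bytes is converted to 512-byte sectors):

disk=/dev/nvme0n1
size_sectors=$(( 1073741824 / 512 ))                       # 2097152 sectors = 1 GiB
sgdisk "$disk" --zap-all                                   # destroy any existing GPT/MBR data
scripts/sync_dev_uevents.sh block/partition nvme0n1p1 &    # background watcher for the p1 uevent
flock "$disk" sgdisk "$disk" --new=1:2048:$(( 2048 + size_sectors - 1 ))
wait $!                                                    # the 'wait 3323462' seen in the next entry
# end sector: 2048 + 2097152 - 1 = 2099199, matching the traced sgdisk call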
00:05:12.678 10:35:29 -- setup/common.sh@57 -- # (( part++ )) 00:05:12.678 10:35:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.678 10:35:29 -- setup/common.sh@62 -- # wait 3323462 00:05:12.678 10:35:29 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.678 10:35:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:12.678 10:35:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.678 10:35:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:12.678 10:35:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:12.678 10:35:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.678 10:35:29 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.678 10:35:29 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:12.678 10:35:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:12.678 10:35:29 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.678 10:35:29 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.678 10:35:29 -- setup/devices.sh@53 -- # local found=0 00:05:12.678 10:35:29 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.678 10:35:29 -- setup/devices.sh@56 -- # : 00:05:12.678 10:35:29 -- setup/devices.sh@59 -- # local pci status 00:05:12.678 10:35:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.678 10:35:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:12.678 10:35:29 -- setup/devices.sh@47 -- # setup output config 00:05:12.678 10:35:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.678 10:35:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:13.611 10:35:30 -- setup/devices.sh@63 -- # found=1 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 
10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.611 10:35:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.611 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.869 10:35:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.870 10:35:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:13.870 10:35:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.870 10:35:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.870 10:35:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.870 10:35:30 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:13.870 10:35:30 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.870 10:35:30 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.870 10:35:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.870 10:35:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.870 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.870 10:35:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.870 10:35:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.127 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:14.127 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:14.127 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.127 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.127 10:35:30 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:14.127 10:35:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:14.127 10:35:30 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.127 10:35:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:14.127 10:35:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:14.127 10:35:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.127 10:35:30 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.127 10:35:30 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:14.127 10:35:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:14.127 10:35:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.127 10:35:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.127 10:35:30 -- setup/devices.sh@53 -- # local found=0 00:05:14.127 10:35:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.127 10:35:30 -- setup/devices.sh@56 -- # : 00:05:14.127 10:35:30 -- setup/devices.sh@59 -- # local pci status 00:05:14.127 10:35:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.127 10:35:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:14.127 10:35:30 -- setup/devices.sh@47 -- # setup output config 00:05:14.127 10:35:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.127 10:35:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:15.061 10:35:31 -- setup/devices.sh@63 -- # found=1 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.061 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.061 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.318 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.318 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.318 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.318 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.318 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.318 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.318 10:35:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.318 10:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.318 10:35:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.318 10:35:32 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:15.318 10:35:32 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.318 10:35:32 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.318 10:35:32 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.318 10:35:32 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.318 10:35:32 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:15.318 10:35:32 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:15.318 10:35:32 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:15.318 10:35:32 -- setup/devices.sh@50 -- # local mount_point= 00:05:15.318 10:35:32 -- setup/devices.sh@51 -- # local test_file= 00:05:15.318 10:35:32 -- setup/devices.sh@53 -- # local found=0 00:05:15.318 10:35:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.318 10:35:32 -- setup/devices.sh@59 -- # local pci status 00:05:15.319 10:35:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.319 10:35:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:15.319 10:35:32 -- setup/devices.sh@47 -- # setup output config 00:05:15.319 10:35:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.319 10:35:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.691 10:35:33 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:16.691 10:35:33 -- setup/devices.sh@63 -- # found=1 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.691 10:35:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.691 10:35:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.691 10:35:33 -- setup/devices.sh@68 -- # return 0 00:05:16.691 10:35:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:16.691 10:35:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.691 10:35:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:16.691 10:35:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.691 10:35:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.691 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.691 00:05:16.691 real 0m6.231s 00:05:16.691 user 0m1.490s 00:05:16.691 sys 0m2.336s 00:05:16.691 10:35:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.691 10:35:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.691 ************************************ 00:05:16.691 END TEST nvme_mount 00:05:16.691 ************************************ 00:05:16.691 10:35:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:16.691 10:35:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.691 10:35:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.691 10:35:33 -- common/autotest_common.sh@10 -- # set +x 00:05:16.691 ************************************ 00:05:16.691 START TEST dm_mount 00:05:16.691 ************************************ 00:05:16.691 10:35:33 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:16.691 10:35:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:16.691 10:35:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:16.691 10:35:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:16.691 10:35:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:16.691 10:35:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:16.691 10:35:33 -- setup/common.sh@40 -- # local part_no=2 00:05:16.691 10:35:33 -- setup/common.sh@41 -- # local size=1073741824 00:05:16.691 10:35:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:16.691 10:35:33 -- setup/common.sh@44 -- # parts=() 00:05:16.691 10:35:33 -- setup/common.sh@44 -- # local parts 00:05:16.691 10:35:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:16.691 10:35:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.691 10:35:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.691 10:35:33 -- setup/common.sh@46 -- # (( part++ )) 00:05:16.691 10:35:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.691 10:35:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:16.691 10:35:33 -- setup/common.sh@46 -- # (( part++ )) 00:05:16.691 10:35:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:16.691 10:35:33 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:16.691 10:35:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:16.691 10:35:33 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:17.624 Creating new GPT entries in memory. 00:05:17.624 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:17.624 other utilities. 00:05:17.624 10:35:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:17.624 10:35:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.624 10:35:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.624 10:35:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.624 10:35:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:18.553 Creating new GPT entries in memory. 00:05:18.553 The operation has completed successfully. 
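dm_mount repeats the same partitioning pattern with part_no=2: two 1 GiB partitions are created back to back (the second sgdisk call appears in the following entries) and are later joined under a device-mapper node named nvme_dm_test. Roughly:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # nvme0n1p1, 1 GiB
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # nvme0n1p2, 1 GiB
# once 'dmsetup create nvme_dm_test' runs, both partitions show up as holders of /dev/dm-0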
00:05:18.553 10:35:35 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.553 10:35:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.553 10:35:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.553 10:35:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.553 10:35:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:19.924 The operation has completed successfully. 00:05:19.924 10:35:36 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.924 10:35:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.924 10:35:36 -- setup/common.sh@62 -- # wait 3325924 00:05:19.924 10:35:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:19.924 10:35:36 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.924 10:35:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.924 10:35:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:19.924 10:35:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:19.924 10:35:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.924 10:35:36 -- setup/devices.sh@161 -- # break 00:05:19.924 10:35:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.924 10:35:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:19.924 10:35:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:19.924 10:35:36 -- setup/devices.sh@166 -- # dm=dm-0 00:05:19.924 10:35:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:19.924 10:35:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:19.924 10:35:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.924 10:35:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:19.924 10:35:36 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.924 10:35:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:19.924 10:35:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:19.924 10:35:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.924 10:35:36 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.924 10:35:36 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:19.924 10:35:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:19.924 10:35:36 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.924 10:35:36 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:19.924 10:35:36 -- setup/devices.sh@53 -- # local found=0 00:05:19.924 10:35:36 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.924 10:35:36 -- setup/devices.sh@56 -- # : 00:05:19.924 10:35:36 -- 
setup/devices.sh@59 -- # local pci status 00:05:19.924 10:35:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.924 10:35:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:19.924 10:35:36 -- setup/devices.sh@47 -- # setup output config 00:05:19.924 10:35:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.924 10:35:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:20.856 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.856 10:35:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:20.856 10:35:37 -- setup/devices.sh@63 -- # found=1 00:05:20.856 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.857 10:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:20.857 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.115 10:35:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.115 10:35:37 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:21.115 10:35:37 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.115 10:35:37 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.115 10:35:37 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.115 10:35:37 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.115 10:35:37 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:21.115 10:35:37 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:21.115 10:35:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:21.115 10:35:37 -- setup/devices.sh@50 -- # local mount_point= 00:05:21.115 10:35:37 -- setup/devices.sh@51 -- # local test_file= 00:05:21.115 10:35:37 -- setup/devices.sh@53 -- # local found=0 00:05:21.115 10:35:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.115 10:35:37 -- setup/devices.sh@59 -- # local pci status 00:05:21.115 10:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.115 10:35:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:21.115 10:35:37 -- setup/devices.sh@47 -- # setup output config 00:05:21.115 10:35:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.115 10:35:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:22.047 10:35:38 -- setup/devices.sh@63 -- # found=1 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.047 10:35:38 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.047 10:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.305 10:35:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.305 10:35:38 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.305 10:35:38 -- setup/devices.sh@68 -- # return 0 00:05:22.305 10:35:38 -- setup/devices.sh@187 -- # cleanup_dm 00:05:22.305 10:35:38 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.306 10:35:38 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.306 10:35:38 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:22.306 10:35:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.306 10:35:38 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:22.306 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.306 10:35:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.306 10:35:39 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:22.306 00:05:22.306 real 0m5.671s 00:05:22.306 user 0m0.978s 00:05:22.306 sys 0m1.581s 00:05:22.306 10:35:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.306 10:35:39 -- common/autotest_common.sh@10 -- # set +x 00:05:22.306 ************************************ 00:05:22.306 END TEST dm_mount 00:05:22.306 ************************************ 00:05:22.306 10:35:39 -- setup/devices.sh@1 -- # cleanup 00:05:22.306 10:35:39 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:22.306 10:35:39 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.306 10:35:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.306 10:35:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.306 10:35:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.306 10:35:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.563 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:22.563 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:22.563 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.563 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.563 10:35:39 -- setup/devices.sh@12 -- # cleanup_dm 00:05:22.563 10:35:39 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.563 10:35:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:22.563 10:35:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.563 10:35:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:22.563 10:35:39 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.563 10:35:39 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:22.563 00:05:22.563 real 0m13.768s 00:05:22.563 user 0m3.092s 00:05:22.563 sys 0m4.935s 00:05:22.563 10:35:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.563 10:35:39 -- common/autotest_common.sh@10 -- # set +x 00:05:22.563 ************************************ 00:05:22.563 END TEST devices 00:05:22.563 ************************************ 00:05:22.563 00:05:22.563 real 0m42.995s 00:05:22.563 user 0m12.375s 00:05:22.563 sys 0m18.980s 00:05:22.563 10:35:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.563 10:35:39 -- common/autotest_common.sh@10 -- # set +x 00:05:22.563 ************************************ 00:05:22.563 END TEST setup.sh 00:05:22.563 ************************************ 00:05:22.563 10:35:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:23.495 Hugepages 00:05:23.495 node hugesize free / total 00:05:23.495 node0 1048576kB 0 / 0 00:05:23.495 node0 2048kB 2048 / 2048 00:05:23.495 node1 1048576kB 0 / 0 00:05:23.752 node1 2048kB 0 / 0 00:05:23.752 00:05:23.752 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.752 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:23.752 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:23.752 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:23.752 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:23.752 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:23.752 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:23.752 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:23.753 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:23.753 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:23.753 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:23.753 10:35:40 -- spdk/autotest.sh@141 -- # uname -s 00:05:23.753 10:35:40 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:23.753 10:35:40 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:23.753 10:35:40 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:25.128 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:25.128 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:25.128 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:25.128 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:25.128 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:25.128 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:05:25.128 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:25.128 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:25.128 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.066 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:26.066 10:35:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:27.002 10:35:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:27.002 10:35:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:27.002 10:35:43 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:27.002 10:35:43 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:27.002 10:35:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:27.002 10:35:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:27.002 10:35:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.002 10:35:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:27.002 10:35:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:27.260 10:35:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:27.260 10:35:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:27.260 10:35:43 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.194 Waiting for block devices as requested 00:05:28.194 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:28.453 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:28.453 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:28.712 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:28.712 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:28.712 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:28.712 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:28.712 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:28.969 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:28.969 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:28.969 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:28.969 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:29.227 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:29.227 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:29.227 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:29.486 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:29.486 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:29.486 10:35:46 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:29.486 10:35:46 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:29.486 10:35:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:29.486 10:35:46 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:29.486 10:35:46 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:29.486 10:35:46 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:29.486 10:35:46 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:29.486 10:35:46 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:29.486 10:35:46 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:29.486 10:35:46 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:29.486 10:35:46 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:29.486 10:35:46 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:29.486 10:35:46 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:29.486 10:35:46 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:29.486 10:35:46 -- common/autotest_common.sh@1542 -- # continue 00:05:29.486 10:35:46 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:29.486 10:35:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.486 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:05:29.486 10:35:46 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:29.486 10:35:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.486 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:05:29.486 10:35:46 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:30.862 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:30.862 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:30.862 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:31.798 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:31.798 10:35:48 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:31.798 10:35:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:31.798 10:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:31.798 10:35:48 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:31.798 10:35:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:31.798 10:35:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.798 10:35:48 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:31.798 10:35:48 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:31.798 10:35:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:31.798 10:35:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.798 
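The nvme_namespace_revert step traced above locates the controller node behind BDF 0000:88:00.0 and inspects two identify-controller fields (OACS and unvmcap) before deciding there is nothing to revert. A condensed, illustrative equivalent of those steps, using the same nvme-cli invocations as the trace:

bdf=0000:88:00.0
ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme0)")             # resolves under .../$bdf/nvme/nvme0
oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)          # ' 0xf' on this drive
ns_manage=$(( oacs & 0x8 ))                                           # namespace-management bit -> 8
unvmcap=$(nvme id-ctrl "/dev/$ctrlr" | grep unvmcap | cut -d: -f2)    # ' 0': no unallocated capacity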
10:35:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:32.057 10:35:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.057 10:35:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:32.057 10:35:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:32.057 10:35:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:32.057 10:35:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:32.057 10:35:48 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:32.057 10:35:48 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:32.057 10:35:48 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:32.057 10:35:48 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:32.057 10:35:48 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:32.057 10:35:48 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:32.057 10:35:48 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:32.057 10:35:48 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3331216 00:05:32.057 10:35:48 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.057 10:35:48 -- common/autotest_common.sh@1583 -- # waitforlisten 3331216 00:05:32.057 10:35:48 -- common/autotest_common.sh@819 -- # '[' -z 3331216 ']' 00:05:32.057 10:35:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.057 10:35:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.057 10:35:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.057 10:35:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.057 10:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.057 [2024-07-10 10:35:48.756032] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
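Before the OPAL revert, the script builds its list of candidate NVMe BDFs by asking gen_nvme.sh for a bdev config and pulling each traddr out with jq, then keeps only devices whose PCI device ID is 0x0a54. A sketch of that discovery, assuming the workspace layout used by this job:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumption: job workspace
    # gen_nvme.sh emits a JSON bdev config; each params.traddr is an NVMe PCI address.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # keep only 0x0a54 devices, as in the trace (0000:88:00.0 on this node)
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    printf '%s\n' "${opal_bdfs[@]}"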
00:05:32.057 [2024-07-10 10:35:48.756112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331216 ] 00:05:32.057 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.057 [2024-07-10 10:35:48.819960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.314 [2024-07-10 10:35:48.914257] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.314 [2024-07-10 10:35:48.914439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.248 10:35:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.248 10:35:49 -- common/autotest_common.sh@852 -- # return 0 00:05:33.248 10:35:49 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:33.248 10:35:49 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:33.248 10:35:49 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:36.528 nvme0n1 00:05:36.528 10:35:52 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:36.528 [2024-07-10 10:35:52.993998] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:36.528 [2024-07-10 10:35:52.994046] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:36.528 request: 00:05:36.528 { 00:05:36.528 "nvme_ctrlr_name": "nvme0", 00:05:36.528 "password": "test", 00:05:36.528 "method": "bdev_nvme_opal_revert", 00:05:36.528 "req_id": 1 00:05:36.528 } 00:05:36.528 Got JSON-RPC error response 00:05:36.528 response: 00:05:36.528 { 00:05:36.528 "code": -32603, 00:05:36.528 "message": "Internal error" 00:05:36.528 } 00:05:36.528 10:35:53 -- common/autotest_common.sh@1589 -- # true 00:05:36.528 10:35:53 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:36.528 10:35:53 -- common/autotest_common.sh@1593 -- # killprocess 3331216 00:05:36.528 10:35:53 -- common/autotest_common.sh@926 -- # '[' -z 3331216 ']' 00:05:36.528 10:35:53 -- common/autotest_common.sh@930 -- # kill -0 3331216 00:05:36.528 10:35:53 -- common/autotest_common.sh@931 -- # uname 00:05:36.528 10:35:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.528 10:35:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3331216 00:05:36.528 10:35:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.528 10:35:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.528 10:35:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3331216' 00:05:36.528 killing process with pid 3331216 00:05:36.528 10:35:53 -- common/autotest_common.sh@945 -- # kill 3331216 00:05:36.528 10:35:53 -- common/autotest_common.sh@950 -- # wait 3331216 00:05:36.528 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:36.528 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:36.528 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:36.528 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:36.528 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:36.528 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 
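The OPAL step itself is two JSON-RPC calls against the freshly started spdk_tgt: attach the controller as bdev nvme0, then request a TPer revert with the test password. On this drive the revert fails (admin SP session error 18), which rpc.py surfaces as JSON-RPC error -32603, and the trace shows the script tolerating that failure and continuing. A sketch of the same sequence, using the rpc.py path from this workspace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
    # The revert may legitimately fail on drives that reject the admin SP session;
    # the autotest script swallows the error (the `true` in the trace) and moves on.
    "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || true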
00:05:38.426 10:35:54 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:38.426 10:35:54 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:38.426 10:35:54 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:38.426 10:35:54 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:38.426 10:35:54 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:38.426 10:35:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.426 10:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.427 10:35:54 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.427 10:35:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.427 10:35:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.427 10:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.427 ************************************ 00:05:38.427 START TEST env 00:05:38.427 ************************************ 00:05:38.427 10:35:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.427 * Looking for test storage... 
00:05:38.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:38.427 10:35:54 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.427 10:35:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.427 10:35:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.427 10:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.427 ************************************ 00:05:38.427 START TEST env_memory 00:05:38.427 ************************************ 00:05:38.427 10:35:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.427 00:05:38.427 00:05:38.427 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.427 http://cunit.sourceforge.net/ 00:05:38.427 00:05:38.427 00:05:38.427 Suite: memory 00:05:38.427 Test: alloc and free memory map ...[2024-07-10 10:35:54.870407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.427 passed 00:05:38.427 Test: mem map translation ...[2024-07-10 10:35:54.890463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.427 [2024-07-10 10:35:54.890484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.427 [2024-07-10 10:35:54.890539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.427 [2024-07-10 10:35:54.890552] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.427 passed 00:05:38.427 Test: mem map registration ...[2024-07-10 10:35:54.930956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:38.427 [2024-07-10 10:35:54.930975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:38.427 passed 00:05:38.427 Test: mem map adjacent registrations ...passed 00:05:38.427 00:05:38.427 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.427 suites 1 1 n/a 0 0 00:05:38.427 tests 4 4 4 0 0 00:05:38.427 asserts 152 152 152 0 n/a 00:05:38.427 00:05:38.427 Elapsed time = 0.144 seconds 00:05:38.427 00:05:38.427 real 0m0.152s 00:05:38.427 user 0m0.143s 00:05:38.427 sys 0m0.009s 00:05:38.427 10:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.427 10:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.427 ************************************ 00:05:38.427 END TEST env_memory 00:05:38.427 ************************************ 00:05:38.427 10:35:55 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:38.427 10:35:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.427 10:35:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.427 10:35:55 -- common/autotest_common.sh@10 -- # set +x 
00:05:38.427 ************************************ 00:05:38.427 START TEST env_vtophys 00:05:38.427 ************************************ 00:05:38.427 10:35:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:38.427 EAL: lib.eal log level changed from notice to debug 00:05:38.427 EAL: Detected lcore 0 as core 0 on socket 0 00:05:38.427 EAL: Detected lcore 1 as core 1 on socket 0 00:05:38.427 EAL: Detected lcore 2 as core 2 on socket 0 00:05:38.427 EAL: Detected lcore 3 as core 3 on socket 0 00:05:38.427 EAL: Detected lcore 4 as core 4 on socket 0 00:05:38.427 EAL: Detected lcore 5 as core 5 on socket 0 00:05:38.427 EAL: Detected lcore 6 as core 8 on socket 0 00:05:38.427 EAL: Detected lcore 7 as core 9 on socket 0 00:05:38.427 EAL: Detected lcore 8 as core 10 on socket 0 00:05:38.427 EAL: Detected lcore 9 as core 11 on socket 0 00:05:38.427 EAL: Detected lcore 10 as core 12 on socket 0 00:05:38.427 EAL: Detected lcore 11 as core 13 on socket 0 00:05:38.427 EAL: Detected lcore 12 as core 0 on socket 1 00:05:38.427 EAL: Detected lcore 13 as core 1 on socket 1 00:05:38.427 EAL: Detected lcore 14 as core 2 on socket 1 00:05:38.427 EAL: Detected lcore 15 as core 3 on socket 1 00:05:38.427 EAL: Detected lcore 16 as core 4 on socket 1 00:05:38.427 EAL: Detected lcore 17 as core 5 on socket 1 00:05:38.427 EAL: Detected lcore 18 as core 8 on socket 1 00:05:38.427 EAL: Detected lcore 19 as core 9 on socket 1 00:05:38.427 EAL: Detected lcore 20 as core 10 on socket 1 00:05:38.427 EAL: Detected lcore 21 as core 11 on socket 1 00:05:38.427 EAL: Detected lcore 22 as core 12 on socket 1 00:05:38.427 EAL: Detected lcore 23 as core 13 on socket 1 00:05:38.427 EAL: Detected lcore 24 as core 0 on socket 0 00:05:38.427 EAL: Detected lcore 25 as core 1 on socket 0 00:05:38.427 EAL: Detected lcore 26 as core 2 on socket 0 00:05:38.427 EAL: Detected lcore 27 as core 3 on socket 0 00:05:38.427 EAL: Detected lcore 28 as core 4 on socket 0 00:05:38.427 EAL: Detected lcore 29 as core 5 on socket 0 00:05:38.427 EAL: Detected lcore 30 as core 8 on socket 0 00:05:38.427 EAL: Detected lcore 31 as core 9 on socket 0 00:05:38.427 EAL: Detected lcore 32 as core 10 on socket 0 00:05:38.427 EAL: Detected lcore 33 as core 11 on socket 0 00:05:38.427 EAL: Detected lcore 34 as core 12 on socket 0 00:05:38.427 EAL: Detected lcore 35 as core 13 on socket 0 00:05:38.427 EAL: Detected lcore 36 as core 0 on socket 1 00:05:38.427 EAL: Detected lcore 37 as core 1 on socket 1 00:05:38.427 EAL: Detected lcore 38 as core 2 on socket 1 00:05:38.427 EAL: Detected lcore 39 as core 3 on socket 1 00:05:38.427 EAL: Detected lcore 40 as core 4 on socket 1 00:05:38.427 EAL: Detected lcore 41 as core 5 on socket 1 00:05:38.427 EAL: Detected lcore 42 as core 8 on socket 1 00:05:38.427 EAL: Detected lcore 43 as core 9 on socket 1 00:05:38.427 EAL: Detected lcore 44 as core 10 on socket 1 00:05:38.427 EAL: Detected lcore 45 as core 11 on socket 1 00:05:38.427 EAL: Detected lcore 46 as core 12 on socket 1 00:05:38.427 EAL: Detected lcore 47 as core 13 on socket 1 00:05:38.427 EAL: Maximum logical cores by configuration: 128 00:05:38.427 EAL: Detected CPU lcores: 48 00:05:38.427 EAL: Detected NUMA nodes: 2 00:05:38.427 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:38.427 EAL: Detected shared linkage of DPDK 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:38.427 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:38.427 EAL: Registered [vdev] bus. 00:05:38.427 EAL: bus.vdev log level changed from disabled to notice 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:38.427 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:38.427 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:38.427 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:38.427 EAL: No shared files mode enabled, IPC will be disabled 00:05:38.427 EAL: No shared files mode enabled, IPC is disabled 00:05:38.427 EAL: Bus pci wants IOVA as 'DC' 00:05:38.427 EAL: Bus vdev wants IOVA as 'DC' 00:05:38.427 EAL: Buses did not request a specific IOVA mode. 00:05:38.427 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:38.427 EAL: Selected IOVA mode 'VA' 00:05:38.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.427 EAL: Probing VFIO support... 00:05:38.427 EAL: IOMMU type 1 (Type 1) is supported 00:05:38.427 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:38.427 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:38.427 EAL: VFIO support initialized 00:05:38.427 EAL: Ask a virtual area of 0x2e000 bytes 00:05:38.427 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:38.427 EAL: Setting up physically contiguous memory... 
00:05:38.427 EAL: Setting maximum number of open files to 524288 00:05:38.427 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:38.427 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:38.427 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:38.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.427 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:38.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.427 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:38.427 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:38.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.427 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:38.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.427 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:38.427 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:38.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.427 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:38.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.427 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:38.427 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:38.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.427 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:38.427 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:38.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.427 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:38.427 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:38.427 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:38.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.427 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:38.427 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.427 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.427 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:38.427 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:38.427 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.428 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:38.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.428 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.428 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:38.428 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:38.428 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.428 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:38.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.428 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.428 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:38.428 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:38.428 EAL: Ask a virtual area of 0x61000 bytes 00:05:38.428 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:38.428 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:38.428 EAL: Ask a virtual area of 0x400000000 bytes 00:05:38.428 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:38.428 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:38.428 EAL: Hugepages will be freed exactly as allocated. 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: TSC frequency is ~2700000 KHz 00:05:38.428 EAL: Main lcore 0 is ready (tid=7fd9d5f54a00;cpuset=[0]) 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 0 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:38.428 EAL: Mem event callback 'spdk:(nil)' registered 00:05:38.428 00:05:38.428 00:05:38.428 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.428 http://cunit.sourceforge.net/ 00:05:38.428 00:05:38.428 00:05:38.428 Suite: components_suite 00:05:38.428 Test: vtophys_malloc_test ...passed 00:05:38.428 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 4MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 4MB 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 6MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 6MB 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 10MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 10MB 00:05:38.428 EAL: Trying to obtain current memory policy. 
00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 18MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 18MB 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 34MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 34MB 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 66MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 66MB 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.428 EAL: Restoring previous memory policy: 4 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.428 EAL: request: mp_malloc_sync 00:05:38.428 EAL: No shared files mode enabled, IPC is disabled 00:05:38.428 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.428 EAL: Trying to obtain current memory policy. 00:05:38.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.687 EAL: Restoring previous memory policy: 4 00:05:38.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.687 EAL: request: mp_malloc_sync 00:05:38.687 EAL: No shared files mode enabled, IPC is disabled 00:05:38.687 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.687 EAL: request: mp_malloc_sync 00:05:38.687 EAL: No shared files mode enabled, IPC is disabled 00:05:38.687 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.687 EAL: Trying to obtain current memory policy. 
00:05:38.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.945 EAL: Restoring previous memory policy: 4 00:05:38.945 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.945 EAL: request: mp_malloc_sync 00:05:38.945 EAL: No shared files mode enabled, IPC is disabled 00:05:38.945 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.945 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.203 EAL: request: mp_malloc_sync 00:05:39.203 EAL: No shared files mode enabled, IPC is disabled 00:05:39.203 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.203 EAL: Trying to obtain current memory policy. 00:05:39.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.461 EAL: Restoring previous memory policy: 4 00:05:39.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.461 EAL: request: mp_malloc_sync 00:05:39.461 EAL: No shared files mode enabled, IPC is disabled 00:05:39.461 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.719 EAL: request: mp_malloc_sync 00:05:39.719 EAL: No shared files mode enabled, IPC is disabled 00:05:39.719 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.719 passed 00:05:39.719 00:05:39.719 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.719 suites 1 1 n/a 0 0 00:05:39.719 tests 2 2 2 0 0 00:05:39.719 asserts 497 497 497 0 n/a 00:05:39.719 00:05:39.719 Elapsed time = 1.361 seconds 00:05:39.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.719 EAL: request: mp_malloc_sync 00:05:39.719 EAL: No shared files mode enabled, IPC is disabled 00:05:39.719 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.719 EAL: No shared files mode enabled, IPC is disabled 00:05:39.719 EAL: No shared files mode enabled, IPC is disabled 00:05:39.719 EAL: No shared files mode enabled, IPC is disabled 00:05:39.719 00:05:39.719 real 0m1.474s 00:05:39.719 user 0m0.835s 00:05:39.719 sys 0m0.609s 00:05:39.719 10:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.719 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.719 ************************************ 00:05:39.719 END TEST env_vtophys 00:05:39.719 ************************************ 00:05:39.719 10:35:56 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:39.720 10:35:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.720 10:35:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.720 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.720 ************************************ 00:05:39.720 START TEST env_pci 00:05:39.720 ************************************ 00:05:39.720 10:35:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:39.720 00:05:39.720 00:05:39.720 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.720 http://cunit.sourceforge.net/ 00:05:39.720 00:05:39.720 00:05:39.720 Suite: pci 00:05:39.720 Test: pci_hook ...[2024-07-10 10:35:56.520247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3332251 has claimed it 00:05:39.978 EAL: Cannot find device (10000:00:01.0) 00:05:39.978 EAL: Failed to attach device on primary process 00:05:39.978 passed 00:05:39.978 00:05:39.978 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.978 suites 1 1 n/a 0 0 00:05:39.978 tests 1 1 1 0 0 
00:05:39.978 asserts 25 25 25 0 n/a 00:05:39.978 00:05:39.978 Elapsed time = 0.021 seconds 00:05:39.978 00:05:39.978 real 0m0.034s 00:05:39.978 user 0m0.008s 00:05:39.978 sys 0m0.026s 00:05:39.978 10:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.978 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.978 ************************************ 00:05:39.978 END TEST env_pci 00:05:39.978 ************************************ 00:05:39.978 10:35:56 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.978 10:35:56 -- env/env.sh@15 -- # uname 00:05:39.978 10:35:56 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.978 10:35:56 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:39.978 10:35:56 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.978 10:35:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:39.978 10:35:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.978 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.978 ************************************ 00:05:39.978 START TEST env_dpdk_post_init 00:05:39.978 ************************************ 00:05:39.978 10:35:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.978 EAL: Detected CPU lcores: 48 00:05:39.978 EAL: Detected NUMA nodes: 2 00:05:39.978 EAL: Detected shared linkage of DPDK 00:05:39.978 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.978 EAL: Selected IOVA mode 'VA' 00:05:39.978 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.978 EAL: VFIO support initialized 00:05:39.978 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.978 EAL: Using IOMMU type 1 (Type 1) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:39.978 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:40.237 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:40.237 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:40.237 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:40.237 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:40.237 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:40.237 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:40.803 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:44.170 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:44.171 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:44.171 Starting DPDK initialization... 00:05:44.171 Starting SPDK post initialization... 00:05:44.171 SPDK NVMe probe 00:05:44.171 Attaching to 0000:88:00.0 00:05:44.171 Attached to 0000:88:00.0 00:05:44.171 Cleaning up... 00:05:44.171 00:05:44.171 real 0m4.379s 00:05:44.171 user 0m3.253s 00:05:44.171 sys 0m0.183s 00:05:44.171 10:36:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.171 10:36:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.171 ************************************ 00:05:44.171 END TEST env_dpdk_post_init 00:05:44.171 ************************************ 00:05:44.171 10:36:00 -- env/env.sh@26 -- # uname 00:05:44.171 10:36:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.171 10:36:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.171 10:36:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.171 10:36:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.171 10:36:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.171 ************************************ 00:05:44.171 START TEST env_mem_callbacks 00:05:44.171 ************************************ 00:05:44.171 10:36:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.171 EAL: Detected CPU lcores: 48 00:05:44.171 EAL: Detected NUMA nodes: 2 00:05:44.171 EAL: Detected shared linkage of DPDK 00:05:44.171 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.429 EAL: Selected IOVA mode 'VA' 00:05:44.429 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.429 EAL: VFIO support initialized 00:05:44.429 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.429 00:05:44.429 00:05:44.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.429 http://cunit.sourceforge.net/ 00:05:44.429 00:05:44.429 00:05:44.429 Suite: memory 00:05:44.429 Test: test ... 
00:05:44.429 register 0x200000200000 2097152 00:05:44.429 malloc 3145728 00:05:44.429 register 0x200000400000 4194304 00:05:44.429 buf 0x200000500000 len 3145728 PASSED 00:05:44.429 malloc 64 00:05:44.429 buf 0x2000004fff40 len 64 PASSED 00:05:44.429 malloc 4194304 00:05:44.429 register 0x200000800000 6291456 00:05:44.429 buf 0x200000a00000 len 4194304 PASSED 00:05:44.429 free 0x200000500000 3145728 00:05:44.429 free 0x2000004fff40 64 00:05:44.429 unregister 0x200000400000 4194304 PASSED 00:05:44.429 free 0x200000a00000 4194304 00:05:44.429 unregister 0x200000800000 6291456 PASSED 00:05:44.429 malloc 8388608 00:05:44.429 register 0x200000400000 10485760 00:05:44.429 buf 0x200000600000 len 8388608 PASSED 00:05:44.429 free 0x200000600000 8388608 00:05:44.429 unregister 0x200000400000 10485760 PASSED 00:05:44.429 passed 00:05:44.429 00:05:44.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.429 suites 1 1 n/a 0 0 00:05:44.429 tests 1 1 1 0 0 00:05:44.429 asserts 15 15 15 0 n/a 00:05:44.429 00:05:44.429 Elapsed time = 0.005 seconds 00:05:44.429 00:05:44.429 real 0m0.046s 00:05:44.429 user 0m0.010s 00:05:44.429 sys 0m0.036s 00:05:44.429 10:36:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.429 10:36:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 ************************************ 00:05:44.429 END TEST env_mem_callbacks 00:05:44.429 ************************************ 00:05:44.429 00:05:44.429 real 0m6.257s 00:05:44.429 user 0m4.324s 00:05:44.429 sys 0m0.985s 00:05:44.429 10:36:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.429 10:36:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 ************************************ 00:05:44.429 END TEST env 00:05:44.429 ************************************ 00:05:44.429 10:36:01 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.429 10:36:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.429 10:36:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.429 10:36:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 ************************************ 00:05:44.429 START TEST rpc 00:05:44.429 ************************************ 00:05:44.429 10:36:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.429 * Looking for test storage... 00:05:44.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.429 10:36:01 -- rpc/rpc.sh@65 -- # spdk_pid=3332982 00:05:44.429 10:36:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.429 10:36:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.429 10:36:01 -- rpc/rpc.sh@67 -- # waitforlisten 3332982 00:05:44.429 10:36:01 -- common/autotest_common.sh@819 -- # '[' -z 3332982 ']' 00:05:44.429 10:36:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.429 10:36:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.429 10:36:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
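rpc.sh's setup, visible in the trace above, is: start spdk_tgt with the bdev tracepoint group enabled, record its pid, and block until the JSON-RPC socket at /var/tmp/spdk.sock answers before any test RPCs are issued. A rough equivalent of that startup, with a simple polling loop standing in for the real waitforlisten helper (the rpc_get_methods call is just a cheap liveness probe, an assumption rather than what waitforlisten actually does):

    spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$spdk_bin" -e bdev &
    spdk_pid=$!
    # Poll the RPC socket until the target responds, then the tests can run.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"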
00:05:44.429 10:36:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.429 10:36:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.429 [2024-07-10 10:36:01.164750] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:44.429 [2024-07-10 10:36:01.164871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332982 ] 00:05:44.429 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.429 [2024-07-10 10:36:01.229901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.687 [2024-07-10 10:36:01.321614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.687 [2024-07-10 10:36:01.321784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.687 [2024-07-10 10:36:01.321803] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3332982' to capture a snapshot of events at runtime. 00:05:44.687 [2024-07-10 10:36:01.321817] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3332982 for offline analysis/debug. 00:05:44.687 [2024-07-10 10:36:01.321845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.621 10:36:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.621 10:36:02 -- common/autotest_common.sh@852 -- # return 0 00:05:45.621 10:36:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.622 10:36:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.622 10:36:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.622 10:36:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.622 10:36:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.622 10:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 ************************************ 00:05:45.622 START TEST rpc_integrity 00:05:45.622 ************************************ 00:05:45.622 10:36:02 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:45.622 10:36:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.622 10:36:02 -- rpc/rpc.sh@13 -- # jq length 00:05:45.622 10:36:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.622 10:36:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.622 10:36:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.622 { 00:05:45.622 "name": "Malloc0", 00:05:45.622 "aliases": [ 00:05:45.622 "d3a3211f-f989-46b8-ac0f-e94c2004ac71" 00:05:45.622 ], 00:05:45.622 "product_name": "Malloc disk", 00:05:45.622 "block_size": 512, 00:05:45.622 "num_blocks": 16384, 00:05:45.622 "uuid": "d3a3211f-f989-46b8-ac0f-e94c2004ac71", 00:05:45.622 "assigned_rate_limits": { 00:05:45.622 "rw_ios_per_sec": 0, 00:05:45.622 "rw_mbytes_per_sec": 0, 00:05:45.622 "r_mbytes_per_sec": 0, 00:05:45.622 "w_mbytes_per_sec": 0 00:05:45.622 }, 00:05:45.622 "claimed": false, 00:05:45.622 "zoned": false, 00:05:45.622 "supported_io_types": { 00:05:45.622 "read": true, 00:05:45.622 "write": true, 00:05:45.622 "unmap": true, 00:05:45.622 "write_zeroes": true, 00:05:45.622 "flush": true, 00:05:45.622 "reset": true, 00:05:45.622 "compare": false, 00:05:45.622 "compare_and_write": false, 00:05:45.622 "abort": true, 00:05:45.622 "nvme_admin": false, 00:05:45.622 "nvme_io": false 00:05:45.622 }, 00:05:45.622 "memory_domains": [ 00:05:45.622 { 00:05:45.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.622 "dma_device_type": 2 00:05:45.622 } 00:05:45.622 ], 00:05:45.622 "driver_specific": {} 00:05:45.622 } 00:05:45.622 ]' 00:05:45.622 10:36:02 -- rpc/rpc.sh@17 -- # jq length 00:05:45.622 10:36:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.622 10:36:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 [2024-07-10 10:36:02.210007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.622 [2024-07-10 10:36:02.210060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.622 [2024-07-10 10:36:02.210084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb55b0 00:05:45.622 [2024-07-10 10:36:02.210099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.622 [2024-07-10 10:36:02.211577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.622 [2024-07-10 10:36:02.211605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.622 Passthru0 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.622 { 00:05:45.622 "name": "Malloc0", 00:05:45.622 "aliases": [ 00:05:45.622 "d3a3211f-f989-46b8-ac0f-e94c2004ac71" 00:05:45.622 ], 00:05:45.622 "product_name": "Malloc disk", 00:05:45.622 "block_size": 512, 00:05:45.622 "num_blocks": 16384, 00:05:45.622 "uuid": "d3a3211f-f989-46b8-ac0f-e94c2004ac71", 00:05:45.622 "assigned_rate_limits": { 00:05:45.622 "rw_ios_per_sec": 0, 00:05:45.622 "rw_mbytes_per_sec": 0, 00:05:45.622 
"r_mbytes_per_sec": 0, 00:05:45.622 "w_mbytes_per_sec": 0 00:05:45.622 }, 00:05:45.622 "claimed": true, 00:05:45.622 "claim_type": "exclusive_write", 00:05:45.622 "zoned": false, 00:05:45.622 "supported_io_types": { 00:05:45.622 "read": true, 00:05:45.622 "write": true, 00:05:45.622 "unmap": true, 00:05:45.622 "write_zeroes": true, 00:05:45.622 "flush": true, 00:05:45.622 "reset": true, 00:05:45.622 "compare": false, 00:05:45.622 "compare_and_write": false, 00:05:45.622 "abort": true, 00:05:45.622 "nvme_admin": false, 00:05:45.622 "nvme_io": false 00:05:45.622 }, 00:05:45.622 "memory_domains": [ 00:05:45.622 { 00:05:45.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.622 "dma_device_type": 2 00:05:45.622 } 00:05:45.622 ], 00:05:45.622 "driver_specific": {} 00:05:45.622 }, 00:05:45.622 { 00:05:45.622 "name": "Passthru0", 00:05:45.622 "aliases": [ 00:05:45.622 "2a88d67c-1bdc-583e-b359-abc03b4638bc" 00:05:45.622 ], 00:05:45.622 "product_name": "passthru", 00:05:45.622 "block_size": 512, 00:05:45.622 "num_blocks": 16384, 00:05:45.622 "uuid": "2a88d67c-1bdc-583e-b359-abc03b4638bc", 00:05:45.622 "assigned_rate_limits": { 00:05:45.622 "rw_ios_per_sec": 0, 00:05:45.622 "rw_mbytes_per_sec": 0, 00:05:45.622 "r_mbytes_per_sec": 0, 00:05:45.622 "w_mbytes_per_sec": 0 00:05:45.622 }, 00:05:45.622 "claimed": false, 00:05:45.622 "zoned": false, 00:05:45.622 "supported_io_types": { 00:05:45.622 "read": true, 00:05:45.622 "write": true, 00:05:45.622 "unmap": true, 00:05:45.622 "write_zeroes": true, 00:05:45.622 "flush": true, 00:05:45.622 "reset": true, 00:05:45.622 "compare": false, 00:05:45.622 "compare_and_write": false, 00:05:45.622 "abort": true, 00:05:45.622 "nvme_admin": false, 00:05:45.622 "nvme_io": false 00:05:45.622 }, 00:05:45.622 "memory_domains": [ 00:05:45.622 { 00:05:45.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.622 "dma_device_type": 2 00:05:45.622 } 00:05:45.622 ], 00:05:45.622 "driver_specific": { 00:05:45.622 "passthru": { 00:05:45.622 "name": "Passthru0", 00:05:45.622 "base_bdev_name": "Malloc0" 00:05:45.622 } 00:05:45.622 } 00:05:45.622 } 00:05:45.622 ]' 00:05:45.622 10:36:02 -- rpc/rpc.sh@21 -- # jq length 00:05:45.622 10:36:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.622 10:36:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.622 10:36:02 -- rpc/rpc.sh@26 -- # jq length 00:05:45.622 10:36:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.622 00:05:45.622 real 0m0.227s 00:05:45.622 user 0m0.142s 00:05:45.622 sys 0m0.024s 00:05:45.622 10:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 ************************************ 
00:05:45.622 END TEST rpc_integrity 00:05:45.622 ************************************ 00:05:45.622 10:36:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.622 10:36:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.622 10:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 ************************************ 00:05:45.622 START TEST rpc_plugins 00:05:45.622 ************************************ 00:05:45.622 10:36:02 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:45.622 10:36:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.622 10:36:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.622 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.622 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.622 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.622 10:36:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.622 { 00:05:45.622 "name": "Malloc1", 00:05:45.622 "aliases": [ 00:05:45.622 "de1e7586-d2e2-4fa9-9e30-78d407833a65" 00:05:45.622 ], 00:05:45.622 "product_name": "Malloc disk", 00:05:45.622 "block_size": 4096, 00:05:45.622 "num_blocks": 256, 00:05:45.622 "uuid": "de1e7586-d2e2-4fa9-9e30-78d407833a65", 00:05:45.622 "assigned_rate_limits": { 00:05:45.622 "rw_ios_per_sec": 0, 00:05:45.622 "rw_mbytes_per_sec": 0, 00:05:45.622 "r_mbytes_per_sec": 0, 00:05:45.622 "w_mbytes_per_sec": 0 00:05:45.622 }, 00:05:45.622 "claimed": false, 00:05:45.622 "zoned": false, 00:05:45.622 "supported_io_types": { 00:05:45.622 "read": true, 00:05:45.622 "write": true, 00:05:45.622 "unmap": true, 00:05:45.622 "write_zeroes": true, 00:05:45.622 "flush": true, 00:05:45.622 "reset": true, 00:05:45.623 "compare": false, 00:05:45.623 "compare_and_write": false, 00:05:45.623 "abort": true, 00:05:45.623 "nvme_admin": false, 00:05:45.623 "nvme_io": false 00:05:45.623 }, 00:05:45.623 "memory_domains": [ 00:05:45.623 { 00:05:45.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.623 "dma_device_type": 2 00:05:45.623 } 00:05:45.623 ], 00:05:45.623 "driver_specific": {} 00:05:45.623 } 00:05:45.623 ]' 00:05:45.623 10:36:02 -- rpc/rpc.sh@32 -- # jq length 00:05:45.623 10:36:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.623 10:36:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.623 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.623 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.623 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.623 10:36:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.623 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.623 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.623 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.623 10:36:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.623 10:36:02 -- rpc/rpc.sh@36 -- # jq length 00:05:45.880 10:36:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.880 00:05:45.880 real 0m0.112s 00:05:45.880 user 0m0.072s 00:05:45.880 sys 0m0.008s 00:05:45.880 10:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.880 10:36:02 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.880 ************************************ 00:05:45.880 END TEST rpc_plugins 00:05:45.880 ************************************ 00:05:45.880 10:36:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.880 10:36:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.880 10:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.880 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.880 ************************************ 00:05:45.880 START TEST rpc_trace_cmd_test 00:05:45.880 ************************************ 00:05:45.880 10:36:02 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:45.880 10:36:02 -- rpc/rpc.sh@40 -- # local info 00:05:45.880 10:36:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.880 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.880 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.880 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.881 10:36:02 -- rpc/rpc.sh@42 -- # info='{ 00:05:45.881 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3332982", 00:05:45.881 "tpoint_group_mask": "0x8", 00:05:45.881 "iscsi_conn": { 00:05:45.881 "mask": "0x2", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "scsi": { 00:05:45.881 "mask": "0x4", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "bdev": { 00:05:45.881 "mask": "0x8", 00:05:45.881 "tpoint_mask": "0xffffffffffffffff" 00:05:45.881 }, 00:05:45.881 "nvmf_rdma": { 00:05:45.881 "mask": "0x10", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "nvmf_tcp": { 00:05:45.881 "mask": "0x20", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "ftl": { 00:05:45.881 "mask": "0x40", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "blobfs": { 00:05:45.881 "mask": "0x80", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "dsa": { 00:05:45.881 "mask": "0x200", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "thread": { 00:05:45.881 "mask": "0x400", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "nvme_pcie": { 00:05:45.881 "mask": "0x800", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "iaa": { 00:05:45.881 "mask": "0x1000", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "nvme_tcp": { 00:05:45.881 "mask": "0x2000", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 }, 00:05:45.881 "bdev_nvme": { 00:05:45.881 "mask": "0x4000", 00:05:45.881 "tpoint_mask": "0x0" 00:05:45.881 } 00:05:45.881 }' 00:05:45.881 10:36:02 -- rpc/rpc.sh@43 -- # jq length 00:05:45.881 10:36:02 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:45.881 10:36:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.881 10:36:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.881 10:36:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.881 10:36:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.881 10:36:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.881 10:36:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.881 10:36:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.881 10:36:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.881 00:05:45.881 real 0m0.192s 00:05:45.881 user 0m0.171s 00:05:45.881 sys 0m0.014s 00:05:45.881 10:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.881 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:45.881 ************************************ 
00:05:45.881 END TEST rpc_trace_cmd_test 00:05:45.881 ************************************ 00:05:46.139 10:36:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.139 10:36:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.139 10:36:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.139 10:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 ************************************ 00:05:46.139 START TEST rpc_daemon_integrity 00:05:46.139 ************************************ 00:05:46.139 10:36:02 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:46.139 10:36:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.139 10:36:02 -- rpc/rpc.sh@13 -- # jq length 00:05:46.139 10:36:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.139 10:36:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.139 10:36:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.139 { 00:05:46.139 "name": "Malloc2", 00:05:46.139 "aliases": [ 00:05:46.139 "593df68d-f09e-4f5e-b174-4ffbec765104" 00:05:46.139 ], 00:05:46.139 "product_name": "Malloc disk", 00:05:46.139 "block_size": 512, 00:05:46.139 "num_blocks": 16384, 00:05:46.139 "uuid": "593df68d-f09e-4f5e-b174-4ffbec765104", 00:05:46.139 "assigned_rate_limits": { 00:05:46.139 "rw_ios_per_sec": 0, 00:05:46.139 "rw_mbytes_per_sec": 0, 00:05:46.139 "r_mbytes_per_sec": 0, 00:05:46.139 "w_mbytes_per_sec": 0 00:05:46.139 }, 00:05:46.139 "claimed": false, 00:05:46.139 "zoned": false, 00:05:46.139 "supported_io_types": { 00:05:46.139 "read": true, 00:05:46.139 "write": true, 00:05:46.139 "unmap": true, 00:05:46.139 "write_zeroes": true, 00:05:46.139 "flush": true, 00:05:46.139 "reset": true, 00:05:46.139 "compare": false, 00:05:46.139 "compare_and_write": false, 00:05:46.139 "abort": true, 00:05:46.139 "nvme_admin": false, 00:05:46.139 "nvme_io": false 00:05:46.139 }, 00:05:46.139 "memory_domains": [ 00:05:46.139 { 00:05:46.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.139 "dma_device_type": 2 00:05:46.139 } 00:05:46.139 ], 00:05:46.139 "driver_specific": {} 00:05:46.139 } 00:05:46.139 ]' 00:05:46.139 10:36:02 -- rpc/rpc.sh@17 -- # jq length 00:05:46.139 10:36:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.139 10:36:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 [2024-07-10 10:36:02.815741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.139 [2024-07-10 
10:36:02.815803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.139 [2024-07-10 10:36:02.815826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10766f0 00:05:46.139 [2024-07-10 10:36:02.815841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.139 [2024-07-10 10:36:02.817177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.139 [2024-07-10 10:36:02.817207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.139 Passthru0 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.139 { 00:05:46.139 "name": "Malloc2", 00:05:46.139 "aliases": [ 00:05:46.139 "593df68d-f09e-4f5e-b174-4ffbec765104" 00:05:46.139 ], 00:05:46.139 "product_name": "Malloc disk", 00:05:46.139 "block_size": 512, 00:05:46.139 "num_blocks": 16384, 00:05:46.139 "uuid": "593df68d-f09e-4f5e-b174-4ffbec765104", 00:05:46.139 "assigned_rate_limits": { 00:05:46.139 "rw_ios_per_sec": 0, 00:05:46.139 "rw_mbytes_per_sec": 0, 00:05:46.139 "r_mbytes_per_sec": 0, 00:05:46.139 "w_mbytes_per_sec": 0 00:05:46.139 }, 00:05:46.139 "claimed": true, 00:05:46.139 "claim_type": "exclusive_write", 00:05:46.139 "zoned": false, 00:05:46.139 "supported_io_types": { 00:05:46.139 "read": true, 00:05:46.139 "write": true, 00:05:46.139 "unmap": true, 00:05:46.139 "write_zeroes": true, 00:05:46.139 "flush": true, 00:05:46.139 "reset": true, 00:05:46.139 "compare": false, 00:05:46.139 "compare_and_write": false, 00:05:46.139 "abort": true, 00:05:46.139 "nvme_admin": false, 00:05:46.139 "nvme_io": false 00:05:46.139 }, 00:05:46.139 "memory_domains": [ 00:05:46.139 { 00:05:46.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.139 "dma_device_type": 2 00:05:46.139 } 00:05:46.139 ], 00:05:46.139 "driver_specific": {} 00:05:46.139 }, 00:05:46.139 { 00:05:46.139 "name": "Passthru0", 00:05:46.139 "aliases": [ 00:05:46.139 "3e9f5818-475a-5a19-99a7-cb6c5135ea40" 00:05:46.139 ], 00:05:46.139 "product_name": "passthru", 00:05:46.139 "block_size": 512, 00:05:46.139 "num_blocks": 16384, 00:05:46.139 "uuid": "3e9f5818-475a-5a19-99a7-cb6c5135ea40", 00:05:46.139 "assigned_rate_limits": { 00:05:46.139 "rw_ios_per_sec": 0, 00:05:46.139 "rw_mbytes_per_sec": 0, 00:05:46.139 "r_mbytes_per_sec": 0, 00:05:46.139 "w_mbytes_per_sec": 0 00:05:46.139 }, 00:05:46.139 "claimed": false, 00:05:46.139 "zoned": false, 00:05:46.139 "supported_io_types": { 00:05:46.139 "read": true, 00:05:46.139 "write": true, 00:05:46.139 "unmap": true, 00:05:46.139 "write_zeroes": true, 00:05:46.139 "flush": true, 00:05:46.139 "reset": true, 00:05:46.139 "compare": false, 00:05:46.139 "compare_and_write": false, 00:05:46.139 "abort": true, 00:05:46.139 "nvme_admin": false, 00:05:46.139 "nvme_io": false 00:05:46.139 }, 00:05:46.139 "memory_domains": [ 00:05:46.139 { 00:05:46.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.139 "dma_device_type": 2 00:05:46.139 } 00:05:46.139 ], 00:05:46.139 "driver_specific": { 00:05:46.139 "passthru": { 00:05:46.139 "name": "Passthru0", 00:05:46.139 "base_bdev_name": "Malloc2" 00:05:46.139 } 00:05:46.139 } 00:05:46.139 } 
00:05:46.139 ]' 00:05:46.139 10:36:02 -- rpc/rpc.sh@21 -- # jq length 00:05:46.139 10:36:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.139 10:36:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.139 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.139 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.139 10:36:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.139 10:36:02 -- rpc/rpc.sh@26 -- # jq length 00:05:46.139 10:36:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.139 00:05:46.139 real 0m0.216s 00:05:46.139 user 0m0.141s 00:05:46.139 sys 0m0.021s 00:05:46.139 10:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.139 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.140 ************************************ 00:05:46.140 END TEST rpc_daemon_integrity 00:05:46.140 ************************************ 00:05:46.140 10:36:02 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.140 10:36:02 -- rpc/rpc.sh@84 -- # killprocess 3332982 00:05:46.140 10:36:02 -- common/autotest_common.sh@926 -- # '[' -z 3332982 ']' 00:05:46.140 10:36:02 -- common/autotest_common.sh@930 -- # kill -0 3332982 00:05:46.140 10:36:02 -- common/autotest_common.sh@931 -- # uname 00:05:46.140 10:36:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.140 10:36:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3332982 00:05:46.398 10:36:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.398 10:36:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.398 10:36:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3332982' 00:05:46.398 killing process with pid 3332982 00:05:46.398 10:36:02 -- common/autotest_common.sh@945 -- # kill 3332982 00:05:46.398 10:36:02 -- common/autotest_common.sh@950 -- # wait 3332982 00:05:46.656 00:05:46.656 real 0m2.310s 00:05:46.656 user 0m2.941s 00:05:46.656 sys 0m0.559s 00:05:46.656 10:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.656 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.656 ************************************ 00:05:46.656 END TEST rpc 00:05:46.656 ************************************ 00:05:46.656 10:36:03 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.656 10:36:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.656 10:36:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.656 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.656 ************************************ 00:05:46.656 START TEST rpc_client 00:05:46.656 ************************************ 00:05:46.656 10:36:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
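The rpc_integrity and rpc_daemon_integrity runs above exercise the target purely over JSON-RPC: create a malloc bdev, layer a passthru vbdev on top of it, confirm both appear (and that the base bdev is claimed) in bdev_get_bdevs, then delete them in reverse order. A minimal by-hand sketch of the same flow, assuming a target is already listening on /var/tmp/spdk_tgt.sock; the RPC names and sizes are the ones visible in the log, the jq checks are illustrative:

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512                       # 8 MB malloc disk, 512-byte blocks (prints the bdev name, e.g. Malloc0)
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # claim Malloc0 behind a passthru vbdev
  $rpc bdev_get_bdevs | jq length                     # expect 2 bdevs while Passthru0 exists
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                     # back to 0 after cleanup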
00:05:46.656 * Looking for test storage... 00:05:46.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:46.656 10:36:03 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:46.656 OK 00:05:46.656 10:36:03 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.656 00:05:46.656 real 0m0.059s 00:05:46.656 user 0m0.022s 00:05:46.656 sys 0m0.041s 00:05:46.656 10:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.656 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.656 ************************************ 00:05:46.656 END TEST rpc_client 00:05:46.656 ************************************ 00:05:46.914 10:36:03 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.914 10:36:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.914 10:36:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.914 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.914 ************************************ 00:05:46.914 START TEST json_config 00:05:46.914 ************************************ 00:05:46.915 10:36:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.915 10:36:03 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.915 10:36:03 -- nvmf/common.sh@7 -- # uname -s 00:05:46.915 10:36:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.915 10:36:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.915 10:36:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.915 10:36:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.915 10:36:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.915 10:36:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.915 10:36:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.915 10:36:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.915 10:36:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.915 10:36:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.915 10:36:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:46.915 10:36:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:46.915 10:36:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.915 10:36:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.915 10:36:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.915 10:36:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.915 10:36:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.915 10:36:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.915 10:36:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.915 10:36:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.915 10:36:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.915 10:36:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.915 10:36:03 -- paths/export.sh@5 -- # export PATH 00:05:46.915 10:36:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.915 10:36:03 -- nvmf/common.sh@46 -- # : 0 00:05:46.915 10:36:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:46.915 10:36:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:46.915 10:36:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:46.915 10:36:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.915 10:36:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.915 10:36:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:46.915 10:36:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:46.915 10:36:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:46.915 10:36:03 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.915 10:36:03 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.915 10:36:03 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:46.915 10:36:03 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.915 10:36:03 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:46.915 10:36:03 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.915 10:36:03 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:46.915 10:36:03 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:46.915 10:36:03 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:46.915 10:36:03 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:46.915 10:36:03 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.915 10:36:03 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:46.915 INFO: JSON configuration test init 00:05:46.915 10:36:03 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:46.915 10:36:03 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:46.915 10:36:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.915 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 10:36:03 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:46.915 10:36:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.915 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 10:36:03 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.915 10:36:03 -- json_config/json_config.sh@98 -- # local app=target 00:05:46.915 10:36:03 -- json_config/json_config.sh@99 -- # shift 00:05:46.915 10:36:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:46.915 10:36:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.915 10:36:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=3333514 00:05:46.915 10:36:03 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.915 10:36:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:46.915 Waiting for target to run... 00:05:46.915 10:36:03 -- json_config/json_config.sh@114 -- # waitforlisten 3333514 /var/tmp/spdk_tgt.sock 00:05:46.915 10:36:03 -- common/autotest_common.sh@819 -- # '[' -z 3333514 ']' 00:05:46.915 10:36:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.915 10:36:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.915 10:36:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.915 10:36:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.915 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:46.915 [2024-07-10 10:36:03.591314] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
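json_config_test_start_app above boots a dedicated spdk_tgt on its own RPC socket (-r /var/tmp/spdk_tgt.sock) with --wait-for-rpc and then sits in waitforlisten until the target accepts RPCs. A rough stand-in for that startup handshake, assuming the repository layout from this run; the 30-second polling loop and the use of rpc_get_methods as the readiness probe are illustrative rather than what autotest_common.sh literally does:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  for _ in $(seq 1 30); do                                          # wait until the RPC socket answers
      scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 1
  done
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init     # --wait-for-rpc pauses init until this is called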
00:05:46.915 [2024-07-10 10:36:03.591392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333514 ] 00:05:46.915 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.174 [2024-07-10 10:36:03.936766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.431 [2024-07-10 10:36:03.998531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.431 [2024-07-10 10:36:03.998717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.689 10:36:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.689 10:36:04 -- common/autotest_common.sh@852 -- # return 0 00:05:47.689 10:36:04 -- json_config/json_config.sh@115 -- # echo '' 00:05:47.689 00:05:47.689 10:36:04 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:47.689 10:36:04 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:47.689 10:36:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.689 10:36:04 -- common/autotest_common.sh@10 -- # set +x 00:05:47.689 10:36:04 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:47.689 10:36:04 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:47.689 10:36:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.689 10:36:04 -- common/autotest_common.sh@10 -- # set +x 00:05:47.947 10:36:04 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.947 10:36:04 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:47.947 10:36:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:51.226 10:36:07 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:51.226 10:36:07 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:51.226 10:36:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.226 10:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:51.226 10:36:07 -- json_config/json_config.sh@48 -- # local ret=0 00:05:51.226 10:36:07 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:51.226 10:36:07 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:51.226 10:36:07 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:51.226 10:36:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:51.226 10:36:07 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:51.226 10:36:07 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:51.226 10:36:07 -- json_config/json_config.sh@51 -- # local get_types 00:05:51.226 10:36:07 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:51.226 10:36:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.226 10:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:51.226 10:36:07 -- json_config/json_config.sh@58 -- # return 0 00:05:51.226 10:36:07 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:51.226 10:36:07 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:51.226 10:36:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.226 10:36:07 -- common/autotest_common.sh@10 -- # set +x 00:05:51.226 10:36:07 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:51.226 10:36:07 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:51.226 10:36:07 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.226 10:36:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.484 MallocForNvmf0 00:05:51.484 10:36:08 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.484 10:36:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.741 MallocForNvmf1 00:05:51.741 10:36:08 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.741 10:36:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.000 [2024-07-10 10:36:08.645924] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.000 10:36:08 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.000 10:36:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.258 10:36:08 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.258 10:36:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.515 10:36:09 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.515 10:36:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.773 10:36:09 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.773 10:36:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.031 [2024-07-10 10:36:09.621149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
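create_nvmf_subsystem_config above assembles the NVMe-oF side of the configuration entirely over the RPC socket: two malloc bdevs, a TCP transport, one subsystem with both bdevs as namespaces, and a listener on 127.0.0.1:4420. The same sequence issued by hand, with every name, size and NQN taken from the log (only the $rpc shorthand is added here):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512  --name MallocForNvmf0              # 8 MB disk, 512-byte blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1              # 4 MB disk, 1024-byte blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0                    # TCP transport, parameters as in the run above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420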
00:05:53.031 10:36:09 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:53.031 10:36:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.031 10:36:09 -- common/autotest_common.sh@10 -- # set +x 00:05:53.031 10:36:09 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:53.031 10:36:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.031 10:36:09 -- common/autotest_common.sh@10 -- # set +x 00:05:53.031 10:36:09 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:53.031 10:36:09 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.031 10:36:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.290 MallocBdevForConfigChangeCheck 00:05:53.290 10:36:09 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:53.290 10:36:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.290 10:36:09 -- common/autotest_common.sh@10 -- # set +x 00:05:53.290 10:36:09 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:53.290 10:36:09 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.548 10:36:10 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:53.548 INFO: shutting down applications... 00:05:53.548 10:36:10 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:53.548 10:36:10 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:53.548 10:36:10 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:53.548 10:36:10 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:55.448 Calling clear_iscsi_subsystem 00:05:55.448 Calling clear_nvmf_subsystem 00:05:55.448 Calling clear_nbd_subsystem 00:05:55.448 Calling clear_ublk_subsystem 00:05:55.448 Calling clear_vhost_blk_subsystem 00:05:55.448 Calling clear_vhost_scsi_subsystem 00:05:55.448 Calling clear_scheduler_subsystem 00:05:55.448 Calling clear_bdev_subsystem 00:05:55.448 Calling clear_accel_subsystem 00:05:55.448 Calling clear_vmd_subsystem 00:05:55.448 Calling clear_sock_subsystem 00:05:55.448 Calling clear_iobuf_subsystem 00:05:55.448 10:36:11 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:55.448 10:36:11 -- json_config/json_config.sh@396 -- # count=100 00:05:55.448 10:36:11 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:55.448 10:36:11 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.448 10:36:11 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:55.448 10:36:11 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:55.706 10:36:12 -- json_config/json_config.sh@398 -- # break 00:05:55.706 10:36:12 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:55.706 10:36:12 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:55.706 10:36:12 -- json_config/json_config.sh@120 -- # local app=target 00:05:55.706 10:36:12 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:55.706 10:36:12 -- json_config/json_config.sh@124 -- # [[ -n 3333514 ]] 00:05:55.706 10:36:12 -- json_config/json_config.sh@127 -- # kill -SIGINT 3333514 00:05:55.706 10:36:12 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:55.706 10:36:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:55.706 10:36:12 -- json_config/json_config.sh@130 -- # kill -0 3333514 00:05:55.706 10:36:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:56.274 10:36:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:56.274 10:36:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:56.274 10:36:12 -- json_config/json_config.sh@130 -- # kill -0 3333514 00:05:56.274 10:36:12 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:56.274 10:36:12 -- json_config/json_config.sh@132 -- # break 00:05:56.274 10:36:12 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:56.274 10:36:12 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:56.274 SPDK target shutdown done 00:05:56.274 10:36:12 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:56.274 INFO: relaunching applications... 00:05:56.274 10:36:12 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.274 10:36:12 -- json_config/json_config.sh@98 -- # local app=target 00:05:56.274 10:36:12 -- json_config/json_config.sh@99 -- # shift 00:05:56.274 10:36:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:56.274 10:36:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:56.274 10:36:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:56.274 10:36:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:56.274 10:36:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:56.274 10:36:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=3335239 00:05:56.274 10:36:12 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.274 10:36:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:56.274 Waiting for target to run... 00:05:56.274 10:36:12 -- json_config/json_config.sh@114 -- # waitforlisten 3335239 /var/tmp/spdk_tgt.sock 00:05:56.274 10:36:12 -- common/autotest_common.sh@819 -- # '[' -z 3335239 ']' 00:05:56.274 10:36:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.274 10:36:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.274 10:36:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.274 10:36:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.274 10:36:12 -- common/autotest_common.sh@10 -- # set +x 00:05:56.274 [2024-07-10 10:36:12.851041] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
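Once the subsystems are cleared the test stops the target with SIGINT and relaunches it from the JSON captured earlier, so the second boot is driven entirely by spdk_tgt_config.json instead of individual RPCs. A condensed sketch of that save/shutdown/relaunch cycle, assuming the same paths as above (the wait handling is illustrative):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # capture the live configuration
  kill -SIGINT "$tgt_pid" && wait "$tgt_pid" 2>/dev/null                        # SPDK target shutdown
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &                                             # relaunch directly from the saved file
  tgt_pid=$!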
00:05:56.274 [2024-07-10 10:36:12.851135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335239 ] 00:05:56.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.530 [2024-07-10 10:36:13.352643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.788 [2024-07-10 10:36:13.425973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.788 [2024-07-10 10:36:13.426171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.066 [2024-07-10 10:36:16.453015] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.066 [2024-07-10 10:36:16.485471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:00.066 10:36:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.066 10:36:16 -- common/autotest_common.sh@852 -- # return 0 00:06:00.066 10:36:16 -- json_config/json_config.sh@115 -- # echo '' 00:06:00.066 00:06:00.066 10:36:16 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:00.066 10:36:16 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:00.066 INFO: Checking if target configuration is the same... 00:06:00.066 10:36:16 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.066 10:36:16 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:00.066 10:36:16 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.066 + '[' 2 -ne 2 ']' 00:06:00.066 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:00.066 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:00.066 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.066 +++ basename /dev/fd/62 00:06:00.066 ++ mktemp /tmp/62.XXX 00:06:00.066 + tmp_file_1=/tmp/62.Too 00:06:00.066 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.066 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:00.066 + tmp_file_2=/tmp/spdk_tgt_config.json.XBN 00:06:00.066 + ret=0 00:06:00.066 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.323 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.323 + diff -u /tmp/62.Too /tmp/spdk_tgt_config.json.XBN 00:06:00.323 + echo 'INFO: JSON config files are the same' 00:06:00.323 INFO: JSON config files are the same 00:06:00.323 + rm /tmp/62.Too /tmp/spdk_tgt_config.json.XBN 00:06:00.323 + exit 0 00:06:00.323 10:36:17 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:00.323 10:36:17 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:00.323 INFO: changing configuration and checking if this can be detected... 
00:06:00.323 10:36:17 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:00.323 10:36:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:00.580 10:36:17 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.580 10:36:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:00.580 10:36:17 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.580 + '[' 2 -ne 2 ']' 00:06:00.580 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:00.580 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:00.580 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.580 +++ basename /dev/fd/62 00:06:00.580 ++ mktemp /tmp/62.XXX 00:06:00.580 + tmp_file_1=/tmp/62.HCm 00:06:00.580 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.580 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:00.580 + tmp_file_2=/tmp/spdk_tgt_config.json.EGf 00:06:00.580 + ret=0 00:06:00.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:01.146 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:01.146 + diff -u /tmp/62.HCm /tmp/spdk_tgt_config.json.EGf 00:06:01.146 + ret=1 00:06:01.146 + echo '=== Start of file: /tmp/62.HCm ===' 00:06:01.146 + cat /tmp/62.HCm 00:06:01.146 + echo '=== End of file: /tmp/62.HCm ===' 00:06:01.146 + echo '' 00:06:01.146 + echo '=== Start of file: /tmp/spdk_tgt_config.json.EGf ===' 00:06:01.146 + cat /tmp/spdk_tgt_config.json.EGf 00:06:01.146 + echo '=== End of file: /tmp/spdk_tgt_config.json.EGf ===' 00:06:01.146 + echo '' 00:06:01.146 + rm /tmp/62.HCm /tmp/spdk_tgt_config.json.EGf 00:06:01.146 + exit 1 00:06:01.146 10:36:17 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:01.146 INFO: configuration change detected. 
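Both comparisons above ("is the relaunched target identical to the saved file?" and "is a change noticed after MallocBdevForConfigChangeCheck is deleted?") rely on the same mechanism: dump the live configuration with save_config, normalise both JSON documents with config_filter.py -method sort, and diff the results. Reproduced by hand with fixed file names (the test itself uses mktemp):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  filter=test/json_config/config_filter.py
  $filter -method sort < spdk_tgt_config.json > /tmp/expected.json   # normalise the saved config
  $rpc save_config | $filter -method sort     > /tmp/actual.json     # normalise the live config
  diff -u /tmp/expected.json /tmp/actual.json \
      && echo 'INFO: JSON config files are the same'                 # a non-empty diff means a configuration change was detected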
00:06:01.146 10:36:17 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:01.146 10:36:17 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:01.146 10:36:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:01.146 10:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.146 10:36:17 -- json_config/json_config.sh@360 -- # local ret=0 00:06:01.146 10:36:17 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:01.146 10:36:17 -- json_config/json_config.sh@370 -- # [[ -n 3335239 ]] 00:06:01.146 10:36:17 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:01.146 10:36:17 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:01.146 10:36:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:01.146 10:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.147 10:36:17 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:01.147 10:36:17 -- json_config/json_config.sh@246 -- # uname -s 00:06:01.147 10:36:17 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:01.147 10:36:17 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:01.147 10:36:17 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:01.147 10:36:17 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:01.147 10:36:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:01.147 10:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:01.147 10:36:17 -- json_config/json_config.sh@376 -- # killprocess 3335239 00:06:01.147 10:36:17 -- common/autotest_common.sh@926 -- # '[' -z 3335239 ']' 00:06:01.147 10:36:17 -- common/autotest_common.sh@930 -- # kill -0 3335239 00:06:01.147 10:36:17 -- common/autotest_common.sh@931 -- # uname 00:06:01.147 10:36:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.147 10:36:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3335239 00:06:01.147 10:36:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:01.147 10:36:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:01.147 10:36:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3335239' 00:06:01.147 killing process with pid 3335239 00:06:01.147 10:36:17 -- common/autotest_common.sh@945 -- # kill 3335239 00:06:01.147 10:36:17 -- common/autotest_common.sh@950 -- # wait 3335239 00:06:03.047 10:36:19 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.047 10:36:19 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:03.047 10:36:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:03.047 10:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.047 10:36:19 -- json_config/json_config.sh@381 -- # return 0 00:06:03.047 10:36:19 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:03.047 INFO: Success 00:06:03.047 00:06:03.047 real 0m15.968s 00:06:03.047 user 0m18.190s 00:06:03.047 sys 0m2.053s 00:06:03.047 10:36:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.047 10:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.047 ************************************ 00:06:03.047 END TEST json_config 00:06:03.047 ************************************ 00:06:03.047 10:36:19 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:03.047 10:36:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.047 10:36:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.047 10:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.047 ************************************ 00:06:03.047 START TEST json_config_extra_key 00:06:03.047 ************************************ 00:06:03.047 10:36:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:03.047 10:36:19 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.047 10:36:19 -- nvmf/common.sh@7 -- # uname -s 00:06:03.047 10:36:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.047 10:36:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.047 10:36:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.047 10:36:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.047 10:36:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.047 10:36:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.047 10:36:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.047 10:36:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.047 10:36:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.047 10:36:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.047 10:36:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:03.047 10:36:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:03.047 10:36:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.047 10:36:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.047 10:36:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.047 10:36:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.047 10:36:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.047 10:36:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.047 10:36:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.047 10:36:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.047 10:36:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.047 10:36:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.047 10:36:19 -- paths/export.sh@5 -- # export PATH 00:06:03.047 10:36:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.047 10:36:19 -- nvmf/common.sh@46 -- # : 0 00:06:03.047 10:36:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:03.047 10:36:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:03.047 10:36:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:03.047 10:36:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.048 10:36:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.048 10:36:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:03.048 10:36:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:03.048 10:36:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:03.048 INFO: launching applications... 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3336179 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:03.048 Waiting for target to run... 00:06:03.048 10:36:19 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3336179 /var/tmp/spdk_tgt.sock 00:06:03.048 10:36:19 -- common/autotest_common.sh@819 -- # '[' -z 3336179 ']' 00:06:03.048 10:36:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.048 10:36:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.048 10:36:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.048 10:36:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.048 10:36:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.048 [2024-07-10 10:36:19.575841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:03.048 [2024-07-10 10:36:19.575922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336179 ] 00:06:03.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.306 [2024-07-10 10:36:19.905397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.306 [2024-07-10 10:36:19.966230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.306 [2024-07-10 10:36:19.966413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.872 10:36:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.872 10:36:20 -- common/autotest_common.sh@852 -- # return 0 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:03.872 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:03.872 INFO: shutting down applications... 
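The extra_key start-up above is spdk_tgt launched with --json plus a poll on the RPC socket until the application answers. A rough stand-in for that sequence, using the same binary, socket and config paths as the trace; the retry loop below is an illustrative replacement for the harness's waitforlisten helper, not its actual implementation:

  #!/usr/bin/env bash
  # Sketch: start spdk_tgt from a JSON config and wait for its RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock

  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
      --json "$SPDK/test/json_config/extra_key.json" &
  app_pid=$!

  # Poll the socket; rpc_get_methods succeeds once the RPC server is listening.
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

  # ... exercise the target, then shut it down the same way the trace does:
  kill -SIGINT "$app_pid"
  wait "$app_pid"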
00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3336179 ]] 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3336179 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3336179 00:06:03.872 10:36:20 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3336179 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:04.440 SPDK target shutdown done 00:06:04.440 10:36:20 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:04.440 Success 00:06:04.440 00:06:04.440 real 0m1.516s 00:06:04.440 user 0m1.491s 00:06:04.440 sys 0m0.407s 00:06:04.440 10:36:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.440 10:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:04.440 ************************************ 00:06:04.440 END TEST json_config_extra_key 00:06:04.440 ************************************ 00:06:04.440 10:36:21 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.440 10:36:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.440 10:36:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.440 10:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:04.440 ************************************ 00:06:04.440 START TEST alias_rpc 00:06:04.440 ************************************ 00:06:04.440 10:36:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.440 * Looking for test storage... 00:06:04.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:04.441 10:36:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.441 10:36:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3336369 00:06:04.441 10:36:21 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.441 10:36:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3336369 00:06:04.441 10:36:21 -- common/autotest_common.sh@819 -- # '[' -z 3336369 ']' 00:06:04.441 10:36:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.441 10:36:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.441 10:36:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:04.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.441 10:36:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.441 10:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:04.441 [2024-07-10 10:36:21.126961] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:04.441 [2024-07-10 10:36:21.127064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336369 ] 00:06:04.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.441 [2024-07-10 10:36:21.192633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.699 [2024-07-10 10:36:21.281956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.699 [2024-07-10 10:36:21.282131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.264 10:36:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.264 10:36:22 -- common/autotest_common.sh@852 -- # return 0 00:06:05.264 10:36:22 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:05.830 10:36:22 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3336369 00:06:05.830 10:36:22 -- common/autotest_common.sh@926 -- # '[' -z 3336369 ']' 00:06:05.830 10:36:22 -- common/autotest_common.sh@930 -- # kill -0 3336369 00:06:05.830 10:36:22 -- common/autotest_common.sh@931 -- # uname 00:06:05.830 10:36:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.830 10:36:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3336369 00:06:05.830 10:36:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.830 10:36:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.830 10:36:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3336369' 00:06:05.830 killing process with pid 3336369 00:06:05.830 10:36:22 -- common/autotest_common.sh@945 -- # kill 3336369 00:06:05.830 10:36:22 -- common/autotest_common.sh@950 -- # wait 3336369 00:06:06.088 00:06:06.088 real 0m1.758s 00:06:06.088 user 0m2.028s 00:06:06.088 sys 0m0.469s 00:06:06.088 10:36:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.088 10:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.088 ************************************ 00:06:06.088 END TEST alias_rpc 00:06:06.088 ************************************ 00:06:06.088 10:36:22 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:06.088 10:36:22 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:06.088 10:36:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.088 10:36:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.088 10:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.088 ************************************ 00:06:06.088 START TEST spdkcli_tcp 00:06:06.088 ************************************ 00:06:06.088 10:36:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:06.088 * Looking for test storage... 
00:06:06.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:06.088 10:36:22 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:06.088 10:36:22 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.088 10:36:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:06.088 10:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3336688 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.088 10:36:22 -- spdkcli/tcp.sh@27 -- # waitforlisten 3336688 00:06:06.088 10:36:22 -- common/autotest_common.sh@819 -- # '[' -z 3336688 ']' 00:06:06.088 10:36:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.088 10:36:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.089 10:36:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.089 10:36:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.089 10:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.089 [2024-07-10 10:36:22.910133] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:06.089 [2024-07-10 10:36:22.910231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336688 ] 00:06:06.347 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.347 [2024-07-10 10:36:22.968729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.347 [2024-07-10 10:36:23.050940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.347 [2024-07-10 10:36:23.054445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.347 [2024-07-10 10:36:23.054455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.285 10:36:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.285 10:36:23 -- common/autotest_common.sh@852 -- # return 0 00:06:07.285 10:36:23 -- spdkcli/tcp.sh@31 -- # socat_pid=3336824 00:06:07.285 10:36:23 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.285 10:36:23 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.285 [ 00:06:07.285 "bdev_malloc_delete", 00:06:07.285 "bdev_malloc_create", 00:06:07.285 "bdev_null_resize", 00:06:07.285 "bdev_null_delete", 00:06:07.285 "bdev_null_create", 00:06:07.285 "bdev_nvme_cuse_unregister", 00:06:07.285 "bdev_nvme_cuse_register", 00:06:07.285 "bdev_opal_new_user", 00:06:07.285 "bdev_opal_set_lock_state", 00:06:07.285 "bdev_opal_delete", 00:06:07.285 "bdev_opal_get_info", 00:06:07.285 "bdev_opal_create", 00:06:07.285 "bdev_nvme_opal_revert", 00:06:07.285 "bdev_nvme_opal_init", 00:06:07.285 "bdev_nvme_send_cmd", 00:06:07.285 "bdev_nvme_get_path_iostat", 00:06:07.285 "bdev_nvme_get_mdns_discovery_info", 00:06:07.285 "bdev_nvme_stop_mdns_discovery", 00:06:07.285 "bdev_nvme_start_mdns_discovery", 00:06:07.285 "bdev_nvme_set_multipath_policy", 00:06:07.285 "bdev_nvme_set_preferred_path", 00:06:07.285 "bdev_nvme_get_io_paths", 00:06:07.285 "bdev_nvme_remove_error_injection", 00:06:07.285 "bdev_nvme_add_error_injection", 00:06:07.285 "bdev_nvme_get_discovery_info", 00:06:07.285 "bdev_nvme_stop_discovery", 00:06:07.285 "bdev_nvme_start_discovery", 00:06:07.285 "bdev_nvme_get_controller_health_info", 00:06:07.285 "bdev_nvme_disable_controller", 00:06:07.285 "bdev_nvme_enable_controller", 00:06:07.285 "bdev_nvme_reset_controller", 00:06:07.285 "bdev_nvme_get_transport_statistics", 00:06:07.285 "bdev_nvme_apply_firmware", 00:06:07.285 "bdev_nvme_detach_controller", 00:06:07.285 "bdev_nvme_get_controllers", 00:06:07.285 "bdev_nvme_attach_controller", 00:06:07.285 "bdev_nvme_set_hotplug", 00:06:07.285 "bdev_nvme_set_options", 00:06:07.285 "bdev_passthru_delete", 00:06:07.285 "bdev_passthru_create", 00:06:07.285 "bdev_lvol_grow_lvstore", 00:06:07.285 "bdev_lvol_get_lvols", 00:06:07.285 "bdev_lvol_get_lvstores", 00:06:07.285 "bdev_lvol_delete", 00:06:07.285 "bdev_lvol_set_read_only", 00:06:07.285 "bdev_lvol_resize", 00:06:07.285 "bdev_lvol_decouple_parent", 00:06:07.285 "bdev_lvol_inflate", 00:06:07.285 "bdev_lvol_rename", 00:06:07.285 "bdev_lvol_clone_bdev", 00:06:07.285 "bdev_lvol_clone", 00:06:07.285 "bdev_lvol_snapshot", 00:06:07.285 "bdev_lvol_create", 00:06:07.285 "bdev_lvol_delete_lvstore", 00:06:07.285 "bdev_lvol_rename_lvstore", 00:06:07.285 "bdev_lvol_create_lvstore", 00:06:07.285 "bdev_raid_set_options", 00:06:07.285 
"bdev_raid_remove_base_bdev", 00:06:07.285 "bdev_raid_add_base_bdev", 00:06:07.285 "bdev_raid_delete", 00:06:07.285 "bdev_raid_create", 00:06:07.285 "bdev_raid_get_bdevs", 00:06:07.285 "bdev_error_inject_error", 00:06:07.285 "bdev_error_delete", 00:06:07.285 "bdev_error_create", 00:06:07.285 "bdev_split_delete", 00:06:07.285 "bdev_split_create", 00:06:07.285 "bdev_delay_delete", 00:06:07.285 "bdev_delay_create", 00:06:07.285 "bdev_delay_update_latency", 00:06:07.285 "bdev_zone_block_delete", 00:06:07.285 "bdev_zone_block_create", 00:06:07.285 "blobfs_create", 00:06:07.285 "blobfs_detect", 00:06:07.285 "blobfs_set_cache_size", 00:06:07.285 "bdev_aio_delete", 00:06:07.285 "bdev_aio_rescan", 00:06:07.285 "bdev_aio_create", 00:06:07.285 "bdev_ftl_set_property", 00:06:07.285 "bdev_ftl_get_properties", 00:06:07.285 "bdev_ftl_get_stats", 00:06:07.285 "bdev_ftl_unmap", 00:06:07.285 "bdev_ftl_unload", 00:06:07.285 "bdev_ftl_delete", 00:06:07.285 "bdev_ftl_load", 00:06:07.285 "bdev_ftl_create", 00:06:07.285 "bdev_virtio_attach_controller", 00:06:07.285 "bdev_virtio_scsi_get_devices", 00:06:07.285 "bdev_virtio_detach_controller", 00:06:07.285 "bdev_virtio_blk_set_hotplug", 00:06:07.285 "bdev_iscsi_delete", 00:06:07.285 "bdev_iscsi_create", 00:06:07.285 "bdev_iscsi_set_options", 00:06:07.285 "accel_error_inject_error", 00:06:07.285 "ioat_scan_accel_module", 00:06:07.285 "dsa_scan_accel_module", 00:06:07.285 "iaa_scan_accel_module", 00:06:07.285 "vfu_virtio_create_scsi_endpoint", 00:06:07.285 "vfu_virtio_scsi_remove_target", 00:06:07.285 "vfu_virtio_scsi_add_target", 00:06:07.285 "vfu_virtio_create_blk_endpoint", 00:06:07.285 "vfu_virtio_delete_endpoint", 00:06:07.285 "iscsi_set_options", 00:06:07.285 "iscsi_get_auth_groups", 00:06:07.286 "iscsi_auth_group_remove_secret", 00:06:07.286 "iscsi_auth_group_add_secret", 00:06:07.286 "iscsi_delete_auth_group", 00:06:07.286 "iscsi_create_auth_group", 00:06:07.286 "iscsi_set_discovery_auth", 00:06:07.286 "iscsi_get_options", 00:06:07.286 "iscsi_target_node_request_logout", 00:06:07.286 "iscsi_target_node_set_redirect", 00:06:07.286 "iscsi_target_node_set_auth", 00:06:07.286 "iscsi_target_node_add_lun", 00:06:07.286 "iscsi_get_connections", 00:06:07.286 "iscsi_portal_group_set_auth", 00:06:07.286 "iscsi_start_portal_group", 00:06:07.286 "iscsi_delete_portal_group", 00:06:07.286 "iscsi_create_portal_group", 00:06:07.286 "iscsi_get_portal_groups", 00:06:07.286 "iscsi_delete_target_node", 00:06:07.286 "iscsi_target_node_remove_pg_ig_maps", 00:06:07.286 "iscsi_target_node_add_pg_ig_maps", 00:06:07.286 "iscsi_create_target_node", 00:06:07.286 "iscsi_get_target_nodes", 00:06:07.286 "iscsi_delete_initiator_group", 00:06:07.286 "iscsi_initiator_group_remove_initiators", 00:06:07.286 "iscsi_initiator_group_add_initiators", 00:06:07.286 "iscsi_create_initiator_group", 00:06:07.286 "iscsi_get_initiator_groups", 00:06:07.286 "nvmf_set_crdt", 00:06:07.286 "nvmf_set_config", 00:06:07.286 "nvmf_set_max_subsystems", 00:06:07.286 "nvmf_subsystem_get_listeners", 00:06:07.286 "nvmf_subsystem_get_qpairs", 00:06:07.286 "nvmf_subsystem_get_controllers", 00:06:07.286 "nvmf_get_stats", 00:06:07.286 "nvmf_get_transports", 00:06:07.286 "nvmf_create_transport", 00:06:07.286 "nvmf_get_targets", 00:06:07.286 "nvmf_delete_target", 00:06:07.286 "nvmf_create_target", 00:06:07.286 "nvmf_subsystem_allow_any_host", 00:06:07.286 "nvmf_subsystem_remove_host", 00:06:07.286 "nvmf_subsystem_add_host", 00:06:07.286 "nvmf_subsystem_remove_ns", 00:06:07.286 "nvmf_subsystem_add_ns", 00:06:07.286 
"nvmf_subsystem_listener_set_ana_state", 00:06:07.286 "nvmf_discovery_get_referrals", 00:06:07.286 "nvmf_discovery_remove_referral", 00:06:07.286 "nvmf_discovery_add_referral", 00:06:07.286 "nvmf_subsystem_remove_listener", 00:06:07.286 "nvmf_subsystem_add_listener", 00:06:07.286 "nvmf_delete_subsystem", 00:06:07.286 "nvmf_create_subsystem", 00:06:07.286 "nvmf_get_subsystems", 00:06:07.286 "env_dpdk_get_mem_stats", 00:06:07.286 "nbd_get_disks", 00:06:07.286 "nbd_stop_disk", 00:06:07.286 "nbd_start_disk", 00:06:07.286 "ublk_recover_disk", 00:06:07.286 "ublk_get_disks", 00:06:07.286 "ublk_stop_disk", 00:06:07.286 "ublk_start_disk", 00:06:07.286 "ublk_destroy_target", 00:06:07.286 "ublk_create_target", 00:06:07.286 "virtio_blk_create_transport", 00:06:07.286 "virtio_blk_get_transports", 00:06:07.286 "vhost_controller_set_coalescing", 00:06:07.286 "vhost_get_controllers", 00:06:07.286 "vhost_delete_controller", 00:06:07.286 "vhost_create_blk_controller", 00:06:07.286 "vhost_scsi_controller_remove_target", 00:06:07.286 "vhost_scsi_controller_add_target", 00:06:07.286 "vhost_start_scsi_controller", 00:06:07.286 "vhost_create_scsi_controller", 00:06:07.286 "thread_set_cpumask", 00:06:07.286 "framework_get_scheduler", 00:06:07.286 "framework_set_scheduler", 00:06:07.286 "framework_get_reactors", 00:06:07.286 "thread_get_io_channels", 00:06:07.286 "thread_get_pollers", 00:06:07.286 "thread_get_stats", 00:06:07.286 "framework_monitor_context_switch", 00:06:07.286 "spdk_kill_instance", 00:06:07.286 "log_enable_timestamps", 00:06:07.286 "log_get_flags", 00:06:07.286 "log_clear_flag", 00:06:07.286 "log_set_flag", 00:06:07.286 "log_get_level", 00:06:07.286 "log_set_level", 00:06:07.286 "log_get_print_level", 00:06:07.286 "log_set_print_level", 00:06:07.286 "framework_enable_cpumask_locks", 00:06:07.286 "framework_disable_cpumask_locks", 00:06:07.286 "framework_wait_init", 00:06:07.286 "framework_start_init", 00:06:07.286 "scsi_get_devices", 00:06:07.286 "bdev_get_histogram", 00:06:07.286 "bdev_enable_histogram", 00:06:07.286 "bdev_set_qos_limit", 00:06:07.286 "bdev_set_qd_sampling_period", 00:06:07.286 "bdev_get_bdevs", 00:06:07.286 "bdev_reset_iostat", 00:06:07.286 "bdev_get_iostat", 00:06:07.286 "bdev_examine", 00:06:07.286 "bdev_wait_for_examine", 00:06:07.286 "bdev_set_options", 00:06:07.286 "notify_get_notifications", 00:06:07.286 "notify_get_types", 00:06:07.286 "accel_get_stats", 00:06:07.286 "accel_set_options", 00:06:07.286 "accel_set_driver", 00:06:07.286 "accel_crypto_key_destroy", 00:06:07.286 "accel_crypto_keys_get", 00:06:07.286 "accel_crypto_key_create", 00:06:07.286 "accel_assign_opc", 00:06:07.286 "accel_get_module_info", 00:06:07.286 "accel_get_opc_assignments", 00:06:07.286 "vmd_rescan", 00:06:07.286 "vmd_remove_device", 00:06:07.286 "vmd_enable", 00:06:07.286 "sock_set_default_impl", 00:06:07.286 "sock_impl_set_options", 00:06:07.286 "sock_impl_get_options", 00:06:07.286 "iobuf_get_stats", 00:06:07.286 "iobuf_set_options", 00:06:07.286 "framework_get_pci_devices", 00:06:07.286 "framework_get_config", 00:06:07.286 "framework_get_subsystems", 00:06:07.286 "vfu_tgt_set_base_path", 00:06:07.286 "trace_get_info", 00:06:07.286 "trace_get_tpoint_group_mask", 00:06:07.286 "trace_disable_tpoint_group", 00:06:07.286 "trace_enable_tpoint_group", 00:06:07.286 "trace_clear_tpoint_mask", 00:06:07.286 "trace_set_tpoint_mask", 00:06:07.286 "spdk_get_version", 00:06:07.286 "rpc_get_methods" 00:06:07.286 ] 00:06:07.286 10:36:24 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:07.286 
10:36:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:07.286 10:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.286 10:36:24 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:07.286 10:36:24 -- spdkcli/tcp.sh@38 -- # killprocess 3336688 00:06:07.286 10:36:24 -- common/autotest_common.sh@926 -- # '[' -z 3336688 ']' 00:06:07.286 10:36:24 -- common/autotest_common.sh@930 -- # kill -0 3336688 00:06:07.286 10:36:24 -- common/autotest_common.sh@931 -- # uname 00:06:07.286 10:36:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.286 10:36:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3336688 00:06:07.544 10:36:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.544 10:36:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.544 10:36:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3336688' 00:06:07.544 killing process with pid 3336688 00:06:07.544 10:36:24 -- common/autotest_common.sh@945 -- # kill 3336688 00:06:07.544 10:36:24 -- common/autotest_common.sh@950 -- # wait 3336688 00:06:07.802 00:06:07.802 real 0m1.703s 00:06:07.802 user 0m3.348s 00:06:07.802 sys 0m0.450s 00:06:07.802 10:36:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.802 10:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.802 ************************************ 00:06:07.802 END TEST spdkcli_tcp 00:06:07.802 ************************************ 00:06:07.802 10:36:24 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.802 10:36:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.802 10:36:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.802 10:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.802 ************************************ 00:06:07.802 START TEST dpdk_mem_utility 00:06:07.802 ************************************ 00:06:07.802 10:36:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.802 * Looking for test storage... 00:06:07.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:07.802 10:36:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:07.802 10:36:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3337020 00:06:07.802 10:36:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.802 10:36:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3337020 00:06:07.802 10:36:24 -- common/autotest_common.sh@819 -- # '[' -z 3337020 ']' 00:06:07.802 10:36:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.802 10:36:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.802 10:36:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.802 10:36:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.802 10:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.061 [2024-07-10 10:36:24.633752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:08.061 [2024-07-10 10:36:24.633844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337020 ] 00:06:08.061 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.061 [2024-07-10 10:36:24.690861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.061 [2024-07-10 10:36:24.779023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.061 [2024-07-10 10:36:24.779200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.997 10:36:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.997 10:36:25 -- common/autotest_common.sh@852 -- # return 0 00:06:08.997 10:36:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.997 10:36:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.997 10:36:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.997 10:36:25 -- common/autotest_common.sh@10 -- # set +x 00:06:08.997 { 00:06:08.997 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.997 } 00:06:08.997 10:36:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.997 10:36:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:08.997 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:08.997 1 heaps totaling size 814.000000 MiB 00:06:08.997 size: 814.000000 MiB heap id: 0 00:06:08.997 end heaps---------- 00:06:08.997 8 mempools totaling size 598.116089 MiB 00:06:08.997 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:08.997 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:08.997 size: 84.521057 MiB name: bdev_io_3337020 00:06:08.997 size: 51.011292 MiB name: evtpool_3337020 00:06:08.997 size: 50.003479 MiB name: msgpool_3337020 00:06:08.998 size: 21.763794 MiB name: PDU_Pool 00:06:08.998 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:08.998 size: 0.026123 MiB name: Session_Pool 00:06:08.998 end mempools------- 00:06:08.998 6 memzones totaling size 4.142822 MiB 00:06:08.998 size: 1.000366 MiB name: RG_ring_0_3337020 00:06:08.998 size: 1.000366 MiB name: RG_ring_1_3337020 00:06:08.998 size: 1.000366 MiB name: RG_ring_4_3337020 00:06:08.998 size: 1.000366 MiB name: RG_ring_5_3337020 00:06:08.998 size: 0.125366 MiB name: RG_ring_2_3337020 00:06:08.998 size: 0.015991 MiB name: RG_ring_3_3337020 00:06:08.998 end memzones------- 00:06:08.998 10:36:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.998 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:08.998 list of free elements. 
size: 12.519348 MiB 00:06:08.998 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:08.998 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:08.998 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:08.998 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:08.998 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:08.998 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:08.998 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:08.998 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:08.998 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:08.998 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:08.998 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:08.998 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:08.998 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:08.998 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:08.998 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:08.998 list of standard malloc elements. size: 199.218079 MiB 00:06:08.998 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:08.998 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:08.998 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.998 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:08.998 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:08.998 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:08.998 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:08.998 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.998 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:08.998 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:08.998 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:08.998 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:08.998 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:08.998 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:08.998 list of memzone associated elements. size: 602.262573 MiB 00:06:08.998 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:08.998 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.998 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:08.998 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.998 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:08.998 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3337020_0 00:06:08.998 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:08.998 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3337020_0 00:06:08.998 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:08.998 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3337020_0 00:06:08.998 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:08.998 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.998 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:08.998 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.998 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:08.998 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3337020 00:06:08.998 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:08.998 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3337020 00:06:08.998 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:08.998 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3337020 00:06:08.998 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:08.998 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.998 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:08.998 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.998 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:08.998 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.998 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:08.998 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.998 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:08.998 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3337020 00:06:08.998 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:08.998 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3337020 00:06:08.998 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:08.998 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3337020 00:06:08.998 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:08.998 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3337020 00:06:08.998 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:08.998 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3337020 00:06:08.998 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:08.998 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.998 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:08.998 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.998 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:08.998 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.998 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:08.998 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3337020 00:06:08.998 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:08.998 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.998 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:08.998 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.998 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:08.998 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3337020 00:06:08.998 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:08.998 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.998 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:08.998 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3337020 00:06:08.998 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:08.998 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3337020 00:06:08.998 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:08.998 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.998 10:36:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.998 10:36:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3337020 00:06:08.998 10:36:25 -- common/autotest_common.sh@926 -- # '[' -z 3337020 ']' 00:06:08.998 10:36:25 -- common/autotest_common.sh@930 -- # kill -0 3337020 00:06:08.998 10:36:25 -- common/autotest_common.sh@931 -- # uname 00:06:08.998 10:36:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.998 10:36:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3337020 00:06:08.998 10:36:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.998 10:36:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.998 10:36:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3337020' 00:06:08.998 killing process with pid 3337020 00:06:08.998 10:36:25 -- common/autotest_common.sh@945 -- # kill 3337020 00:06:08.998 10:36:25 -- common/autotest_common.sh@950 -- # wait 3337020 00:06:09.565 00:06:09.565 real 0m1.574s 00:06:09.565 user 0m1.708s 00:06:09.565 sys 0m0.452s 00:06:09.565 10:36:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.565 10:36:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 ************************************ 00:06:09.565 END TEST dpdk_mem_utility 00:06:09.565 ************************************ 00:06:09.565 10:36:26 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:09.565 10:36:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.565 10:36:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.565 10:36:26 -- common/autotest_common.sh@10 -- # set +x 
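The memory report in the dpdk_mem_utility test above comes from two pieces: the env_dpdk_get_mem_stats RPC makes the target write a raw dump (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary and then, with -m 0, as the detailed element listing. The same two steps by hand, assuming a target is already listening on /var/tmp/spdk.sock:

  #!/usr/bin/env bash
  # Sketch: capture and summarize DPDK memory usage from a running target.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Ask the target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt.
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats

  # Summary view first, then the detailed per-element view seen in the trace.
  "$SPDK/scripts/dpdk_mem_info.py"
  "$SPDK/scripts/dpdk_mem_info.py" -m 0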
00:06:09.565 ************************************ 00:06:09.565 START TEST event 00:06:09.565 ************************************ 00:06:09.565 10:36:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:09.565 * Looking for test storage... 00:06:09.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:09.565 10:36:26 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:09.565 10:36:26 -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.565 10:36:26 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.565 10:36:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:09.566 10:36:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.566 10:36:26 -- common/autotest_common.sh@10 -- # set +x 00:06:09.566 ************************************ 00:06:09.566 START TEST event_perf 00:06:09.566 ************************************ 00:06:09.566 10:36:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.566 Running I/O for 1 seconds...[2024-07-10 10:36:26.204047] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:09.566 [2024-07-10 10:36:26.204131] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337214 ] 00:06:09.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.566 [2024-07-10 10:36:26.270989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.566 [2024-07-10 10:36:26.370303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.566 [2024-07-10 10:36:26.370354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.566 [2024-07-10 10:36:26.370475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.566 [2024-07-10 10:36:26.370479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.938 Running I/O for 1 seconds... 00:06:10.938 lcore 0: 240493 00:06:10.938 lcore 1: 240493 00:06:10.938 lcore 2: 240494 00:06:10.938 lcore 3: 240491 00:06:10.938 done. 
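The lcore lines above are event_perf's per-core event counts for a one-second run across cores 0-3 (-m 0xF -t 1); the timing summary that follows is the shell's own measurement of the same run. Re-running the measurement outside the harness is a single invocation of the test binary from the trace:

  #!/usr/bin/env bash
  # Sketch: rerun the reactor event-rate measurement by hand.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # -m 0xF: reactors on cores 0-3; -t 1: measure for one second.
  "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1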
00:06:10.938 00:06:10.938 real 0m1.258s 00:06:10.938 user 0m4.159s 00:06:10.938 sys 0m0.093s 00:06:10.938 10:36:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.938 10:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:10.938 ************************************ 00:06:10.938 END TEST event_perf 00:06:10.938 ************************************ 00:06:10.938 10:36:27 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:10.938 10:36:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:10.938 10:36:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.938 10:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:10.938 ************************************ 00:06:10.938 START TEST event_reactor 00:06:10.938 ************************************ 00:06:10.938 10:36:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:10.938 [2024-07-10 10:36:27.486107] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:10.938 [2024-07-10 10:36:27.486174] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337374 ] 00:06:10.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.938 [2024-07-10 10:36:27.545294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.938 [2024-07-10 10:36:27.635698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.312 test_start 00:06:12.312 oneshot 00:06:12.312 tick 100 00:06:12.312 tick 100 00:06:12.312 tick 250 00:06:12.312 tick 100 00:06:12.312 tick 100 00:06:12.312 tick 100 00:06:12.312 tick 250 00:06:12.312 tick 500 00:06:12.312 tick 100 00:06:12.312 tick 100 00:06:12.312 tick 250 00:06:12.312 tick 100 00:06:12.312 tick 100 00:06:12.312 test_end 00:06:12.312 00:06:12.312 real 0m1.243s 00:06:12.312 user 0m1.162s 00:06:12.312 sys 0m0.076s 00:06:12.312 10:36:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.312 10:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.312 ************************************ 00:06:12.312 END TEST event_reactor 00:06:12.312 ************************************ 00:06:12.312 10:36:28 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.312 10:36:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:12.312 10:36:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.312 10:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.312 ************************************ 00:06:12.312 START TEST event_reactor_perf 00:06:12.312 ************************************ 00:06:12.312 10:36:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.312 [2024-07-10 10:36:28.754656] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:12.312 [2024-07-10 10:36:28.754748] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337534 ] 00:06:12.312 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.312 [2024-07-10 10:36:28.815316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.312 [2024-07-10 10:36:28.905366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.246 test_start 00:06:13.246 test_end 00:06:13.246 Performance: 351769 events per second 00:06:13.246 00:06:13.246 real 0m1.246s 00:06:13.246 user 0m1.160s 00:06:13.246 sys 0m0.081s 00:06:13.246 10:36:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.246 10:36:29 -- common/autotest_common.sh@10 -- # set +x 00:06:13.246 ************************************ 00:06:13.246 END TEST event_reactor_perf 00:06:13.246 ************************************ 00:06:13.246 10:36:30 -- event/event.sh@49 -- # uname -s 00:06:13.246 10:36:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:13.246 10:36:30 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:13.246 10:36:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.246 10:36:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.246 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.246 ************************************ 00:06:13.246 START TEST event_scheduler 00:06:13.246 ************************************ 00:06:13.246 10:36:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:13.246 * Looking for test storage... 00:06:13.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:13.504 10:36:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:13.504 10:36:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3337741 00:06:13.504 10:36:30 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:13.504 10:36:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.504 10:36:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 3337741 00:06:13.504 10:36:30 -- common/autotest_common.sh@819 -- # '[' -z 3337741 ']' 00:06:13.504 10:36:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.504 10:36:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.504 10:36:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.504 10:36:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.504 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.504 [2024-07-10 10:36:30.107846] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:13.504 [2024-07-10 10:36:30.107949] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337741 ] 00:06:13.504 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.504 [2024-07-10 10:36:30.172601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.504 [2024-07-10 10:36:30.259151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.504 [2024-07-10 10:36:30.259228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.504 [2024-07-10 10:36:30.259231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.504 [2024-07-10 10:36:30.259208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.504 10:36:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.504 10:36:30 -- common/autotest_common.sh@852 -- # return 0 00:06:13.504 10:36:30 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:13.504 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.504 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.504 POWER: Env isn't set yet! 00:06:13.504 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:13.504 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:13.504 POWER: Cannot get available frequencies of lcore 0 00:06:13.504 POWER: Attempting to initialise PSTAT power management... 00:06:13.504 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:13.504 POWER: Initialized successfully for lcore 0 power management 00:06:13.762 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:13.762 POWER: Initialized successfully for lcore 1 power management 00:06:13.762 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:13.762 POWER: Initialized successfully for lcore 2 power management 00:06:13.762 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:13.762 POWER: Initialized successfully for lcore 3 power management 00:06:13.762 [2024-07-10 10:36:30.358619] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:13.762 [2024-07-10 10:36:30.358636] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:13.763 [2024-07-10 10:36:30.358646] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 [2024-07-10 10:36:30.459521] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:13.763 10:36:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.763 10:36:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 ************************************ 00:06:13.763 START TEST scheduler_create_thread 00:06:13.763 ************************************ 00:06:13.763 10:36:30 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 2 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 3 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 4 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 5 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 6 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 7 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 8 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 9 00:06:13.763 
10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 10 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:13.763 10:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.763 10:36:30 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:13.763 10:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.763 10:36:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.662 10:36:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.662 10:36:32 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.662 10:36:32 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.662 10:36:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.662 10:36:32 -- common/autotest_common.sh@10 -- # set +x 00:06:16.596 10:36:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.596 00:06:16.596 real 0m2.616s 00:06:16.596 user 0m0.012s 00:06:16.596 sys 0m0.003s 00:06:16.596 10:36:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.596 10:36:33 -- common/autotest_common.sh@10 -- # set +x 00:06:16.596 ************************************ 00:06:16.596 END TEST scheduler_create_thread 00:06:16.596 ************************************ 00:06:16.596 10:36:33 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.596 10:36:33 -- scheduler/scheduler.sh@46 -- # killprocess 3337741 00:06:16.596 10:36:33 -- common/autotest_common.sh@926 -- # '[' -z 3337741 ']' 00:06:16.596 10:36:33 -- common/autotest_common.sh@930 -- # kill -0 3337741 00:06:16.596 10:36:33 -- common/autotest_common.sh@931 -- # uname 00:06:16.596 10:36:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.596 10:36:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3337741 00:06:16.596 10:36:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:16.596 10:36:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:16.596 10:36:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3337741' 00:06:16.596 killing process with pid 3337741 00:06:16.596 10:36:33 -- common/autotest_common.sh@945 -- # kill 3337741 00:06:16.596 10:36:33 -- common/autotest_common.sh@950 -- # wait 3337741 00:06:16.854 [2024-07-10 10:36:33.562428] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
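(For reference, the scheduler_create_thread run above drives an RPC plugin that ships with the test rather than core SPDK RPCs. A rough sketch of the calls it issues, assuming PYTHONPATH is pointed at the directory containing the scheduler_plugin module; the thread ids 11 and 12 are the ones assigned in this particular run:

    # create a pinned thread on core 0 that reports itself 100% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # retune an existing thread (id 11 here) to 50% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # create and then delete a short-lived thread, as the test does before shutting down
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
)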
00:06:17.112 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:17.112 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:17.112 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:17.112 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:17.112 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:17.112 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:17.112 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:17.112 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:17.112 00:06:17.112 real 0m3.774s 00:06:17.112 user 0m5.742s 00:06:17.112 sys 0m0.301s 00:06:17.112 10:36:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.112 10:36:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.112 ************************************ 00:06:17.112 END TEST event_scheduler 00:06:17.112 ************************************ 00:06:17.112 10:36:33 -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.112 10:36:33 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.112 10:36:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.112 10:36:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.112 10:36:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.112 ************************************ 00:06:17.112 START TEST app_repeat 00:06:17.112 ************************************ 00:06:17.112 10:36:33 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:17.112 10:36:33 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.112 10:36:33 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.112 10:36:33 -- event/event.sh@13 -- # local nbd_list 00:06:17.112 10:36:33 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.112 10:36:33 -- event/event.sh@14 -- # local bdev_list 00:06:17.112 10:36:33 -- event/event.sh@15 -- # local repeat_times=4 00:06:17.112 10:36:33 -- event/event.sh@17 -- # modprobe nbd 00:06:17.112 10:36:33 -- event/event.sh@19 -- # repeat_pid=3338301 00:06:17.112 10:36:33 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.112 10:36:33 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.112 10:36:33 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3338301' 00:06:17.112 Process app_repeat pid: 3338301 00:06:17.112 10:36:33 -- event/event.sh@23 -- # for i in {0..2} 00:06:17.112 10:36:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.112 spdk_app_start Round 0 00:06:17.112 10:36:33 -- event/event.sh@25 -- # waitforlisten 3338301 /var/tmp/spdk-nbd.sock 00:06:17.112 10:36:33 -- common/autotest_common.sh@819 -- # '[' -z 3338301 ']' 00:06:17.112 10:36:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.112 10:36:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.112 10:36:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:17.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.112 10:36:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.112 10:36:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.112 [2024-07-10 10:36:33.851612] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:17.112 [2024-07-10 10:36:33.851686] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338301 ] 00:06:17.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.112 [2024-07-10 10:36:33.915503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.371 [2024-07-10 10:36:34.013452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.371 [2024-07-10 10:36:34.013457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.371 10:36:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.371 10:36:34 -- common/autotest_common.sh@852 -- # return 0 00:06:17.371 10:36:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.628 Malloc0 00:06:17.628 10:36:34 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.887 Malloc1 00:06:17.887 10:36:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@12 -- # local i 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.887 10:36:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.145 /dev/nbd0 00:06:18.145 10:36:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.145 10:36:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.145 10:36:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:18.145 10:36:34 -- common/autotest_common.sh@857 -- # local i 00:06:18.145 10:36:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:18.145 10:36:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:18.145 10:36:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:18.145 10:36:34 -- 
common/autotest_common.sh@861 -- # break 00:06:18.145 10:36:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:18.145 10:36:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:18.145 10:36:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.145 1+0 records in 00:06:18.145 1+0 records out 00:06:18.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158907 s, 25.8 MB/s 00:06:18.145 10:36:34 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.145 10:36:34 -- common/autotest_common.sh@874 -- # size=4096 00:06:18.145 10:36:34 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.145 10:36:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:18.145 10:36:34 -- common/autotest_common.sh@877 -- # return 0 00:06:18.145 10:36:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.145 10:36:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.145 10:36:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.402 /dev/nbd1 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.402 10:36:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:18.402 10:36:35 -- common/autotest_common.sh@857 -- # local i 00:06:18.402 10:36:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:18.402 10:36:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:18.402 10:36:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:18.402 10:36:35 -- common/autotest_common.sh@861 -- # break 00:06:18.402 10:36:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:18.402 10:36:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:18.402 10:36:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.402 1+0 records in 00:06:18.402 1+0 records out 00:06:18.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202253 s, 20.3 MB/s 00:06:18.402 10:36:35 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.402 10:36:35 -- common/autotest_common.sh@874 -- # size=4096 00:06:18.402 10:36:35 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.402 10:36:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:18.402 10:36:35 -- common/autotest_common.sh@877 -- # return 0 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.402 10:36:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.660 10:36:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.660 { 00:06:18.660 "nbd_device": "/dev/nbd0", 00:06:18.660 "bdev_name": "Malloc0" 00:06:18.660 }, 00:06:18.660 { 00:06:18.660 "nbd_device": "/dev/nbd1", 
00:06:18.660 "bdev_name": "Malloc1" 00:06:18.660 } 00:06:18.660 ]' 00:06:18.660 10:36:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.660 { 00:06:18.660 "nbd_device": "/dev/nbd0", 00:06:18.660 "bdev_name": "Malloc0" 00:06:18.660 }, 00:06:18.660 { 00:06:18.660 "nbd_device": "/dev/nbd1", 00:06:18.660 "bdev_name": "Malloc1" 00:06:18.660 } 00:06:18.660 ]' 00:06:18.660 10:36:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.918 /dev/nbd1' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.918 /dev/nbd1' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.918 256+0 records in 00:06:18.918 256+0 records out 00:06:18.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524015 s, 200 MB/s 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.918 256+0 records in 00:06:18.918 256+0 records out 00:06:18.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234642 s, 44.7 MB/s 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.918 256+0 records in 00:06:18.918 256+0 records out 00:06:18.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024389 s, 43.0 MB/s 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@51 -- # local i 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.918 10:36:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@41 -- # break 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.176 10:36:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@41 -- # break 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.434 10:36:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@65 -- # true 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.691 10:36:36 -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.691 10:36:36 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.949 10:36:36 -- event/event.sh@35 -- # 
sleep 3 00:06:20.207 [2024-07-10 10:36:36.870768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.207 [2024-07-10 10:36:36.960028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.207 [2024-07-10 10:36:36.960029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.207 [2024-07-10 10:36:37.021260] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.207 [2024-07-10 10:36:37.021334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.591 10:36:39 -- event/event.sh@23 -- # for i in {0..2} 00:06:23.591 10:36:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:23.591 spdk_app_start Round 1 00:06:23.591 10:36:39 -- event/event.sh@25 -- # waitforlisten 3338301 /var/tmp/spdk-nbd.sock 00:06:23.591 10:36:39 -- common/autotest_common.sh@819 -- # '[' -z 3338301 ']' 00:06:23.591 10:36:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.591 10:36:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.591 10:36:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.591 10:36:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.591 10:36:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.591 10:36:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.591 10:36:39 -- common/autotest_common.sh@852 -- # return 0 00:06:23.591 10:36:39 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.591 Malloc0 00:06:23.591 10:36:40 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.850 Malloc1 00:06:23.850 10:36:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@12 -- # local i 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.850 10:36:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.850 /dev/nbd0 00:06:23.850 10:36:40 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.107 10:36:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.107 10:36:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:24.107 10:36:40 -- common/autotest_common.sh@857 -- # local i 00:06:24.107 10:36:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:24.107 10:36:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:24.107 10:36:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:24.107 10:36:40 -- common/autotest_common.sh@861 -- # break 00:06:24.107 10:36:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:24.107 10:36:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:24.107 10:36:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.107 1+0 records in 00:06:24.107 1+0 records out 00:06:24.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184818 s, 22.2 MB/s 00:06:24.107 10:36:40 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.107 10:36:40 -- common/autotest_common.sh@874 -- # size=4096 00:06:24.107 10:36:40 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.107 10:36:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:24.107 10:36:40 -- common/autotest_common.sh@877 -- # return 0 00:06:24.107 10:36:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.107 10:36:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.107 10:36:40 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.107 /dev/nbd1 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.365 10:36:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:24.365 10:36:40 -- common/autotest_common.sh@857 -- # local i 00:06:24.365 10:36:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:24.365 10:36:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:24.365 10:36:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:24.365 10:36:40 -- common/autotest_common.sh@861 -- # break 00:06:24.365 10:36:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:24.365 10:36:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:24.365 10:36:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.365 1+0 records in 00:06:24.365 1+0 records out 00:06:24.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186968 s, 21.9 MB/s 00:06:24.365 10:36:40 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.365 10:36:40 -- common/autotest_common.sh@874 -- # size=4096 00:06:24.365 10:36:40 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.365 10:36:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:24.365 10:36:40 -- common/autotest_common.sh@877 -- # return 0 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.365 10:36:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.622 { 00:06:24.622 "nbd_device": "/dev/nbd0", 00:06:24.622 "bdev_name": "Malloc0" 00:06:24.622 }, 00:06:24.622 { 00:06:24.622 "nbd_device": "/dev/nbd1", 00:06:24.622 "bdev_name": "Malloc1" 00:06:24.622 } 00:06:24.622 ]' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.622 { 00:06:24.622 "nbd_device": "/dev/nbd0", 00:06:24.622 "bdev_name": "Malloc0" 00:06:24.622 }, 00:06:24.622 { 00:06:24.622 "nbd_device": "/dev/nbd1", 00:06:24.622 "bdev_name": "Malloc1" 00:06:24.622 } 00:06:24.622 ]' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.622 /dev/nbd1' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.622 /dev/nbd1' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.622 256+0 records in 00:06:24.622 256+0 records out 00:06:24.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462975 s, 226 MB/s 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.622 256+0 records in 00:06:24.622 256+0 records out 00:06:24.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237828 s, 44.1 MB/s 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.622 256+0 records in 00:06:24.622 256+0 records out 00:06:24.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231745 s, 45.2 MB/s 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@51 -- # local i 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.622 10:36:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@41 -- # break 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.879 10:36:41 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@41 -- # break 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.136 10:36:41 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@65 -- # true 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.394 10:36:42 -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.394 10:36:42 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.652 10:36:42 -- event/event.sh@35 -- # sleep 3 00:06:25.910 [2024-07-10 10:36:42.591057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.910 [2024-07-10 10:36:42.678714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.910 [2024-07-10 10:36:42.678719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.168 [2024-07-10 10:36:42.739594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.168 [2024-07-10 10:36:42.739653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.694 10:36:45 -- event/event.sh@23 -- # for i in {0..2} 00:06:28.694 10:36:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.694 spdk_app_start Round 2 00:06:28.694 10:36:45 -- event/event.sh@25 -- # waitforlisten 3338301 /var/tmp/spdk-nbd.sock 00:06:28.694 10:36:45 -- common/autotest_common.sh@819 -- # '[' -z 3338301 ']' 00:06:28.694 10:36:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.694 10:36:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.694 10:36:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
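(Each app_repeat round above finishes with the same sanity check before the next round begins: once both devices are stopped, nbd_get_disks should report nothing. A condensed sketch of that check, with the long workspace paths shortened and a trailing || true added because grep -c exits non-zero when it matches nothing:

    # count exported nbd devices over the app's RPC socket; expect zero after teardown
    count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]
)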
00:06:28.694 10:36:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.694 10:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:28.952 10:36:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.952 10:36:45 -- common/autotest_common.sh@852 -- # return 0 00:06:28.952 10:36:45 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.210 Malloc0 00:06:29.210 10:36:45 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.468 Malloc1 00:06:29.468 10:36:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@12 -- # local i 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.468 10:36:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.726 /dev/nbd0 00:06:29.726 10:36:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.726 10:36:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.726 10:36:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:29.726 10:36:46 -- common/autotest_common.sh@857 -- # local i 00:06:29.726 10:36:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:29.726 10:36:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:29.726 10:36:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:29.726 10:36:46 -- common/autotest_common.sh@861 -- # break 00:06:29.726 10:36:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:29.726 10:36:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:29.726 10:36:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.726 1+0 records in 00:06:29.726 1+0 records out 00:06:29.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215349 s, 19.0 MB/s 00:06:29.726 10:36:46 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.726 10:36:46 -- common/autotest_common.sh@874 -- # size=4096 00:06:29.726 10:36:46 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.726 10:36:46 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:29.726 10:36:46 -- common/autotest_common.sh@877 -- # return 0 00:06:29.726 10:36:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.726 10:36:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.726 10:36:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.984 /dev/nbd1 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.984 10:36:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:29.984 10:36:46 -- common/autotest_common.sh@857 -- # local i 00:06:29.984 10:36:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:29.984 10:36:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:29.984 10:36:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:29.984 10:36:46 -- common/autotest_common.sh@861 -- # break 00:06:29.984 10:36:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:29.984 10:36:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:29.984 10:36:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.984 1+0 records in 00:06:29.984 1+0 records out 00:06:29.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213172 s, 19.2 MB/s 00:06:29.984 10:36:46 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.984 10:36:46 -- common/autotest_common.sh@874 -- # size=4096 00:06:29.984 10:36:46 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.984 10:36:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:29.984 10:36:46 -- common/autotest_common.sh@877 -- # return 0 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.984 10:36:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.243 { 00:06:30.243 "nbd_device": "/dev/nbd0", 00:06:30.243 "bdev_name": "Malloc0" 00:06:30.243 }, 00:06:30.243 { 00:06:30.243 "nbd_device": "/dev/nbd1", 00:06:30.243 "bdev_name": "Malloc1" 00:06:30.243 } 00:06:30.243 ]' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.243 { 00:06:30.243 "nbd_device": "/dev/nbd0", 00:06:30.243 "bdev_name": "Malloc0" 00:06:30.243 }, 00:06:30.243 { 00:06:30.243 "nbd_device": "/dev/nbd1", 00:06:30.243 "bdev_name": "Malloc1" 00:06:30.243 } 00:06:30.243 ]' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.243 /dev/nbd1' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.243 /dev/nbd1' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.243 10:36:46 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.243 256+0 records in 00:06:30.243 256+0 records out 00:06:30.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045033 s, 233 MB/s 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.243 256+0 records in 00:06:30.243 256+0 records out 00:06:30.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243329 s, 43.1 MB/s 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.243 256+0 records in 00:06:30.243 256+0 records out 00:06:30.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229147 s, 45.8 MB/s 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@51 -- # local i 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.243 10:36:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.501 10:36:47 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@41 -- # break 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.501 10:36:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@41 -- # break 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.759 10:36:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@65 -- # true 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.017 10:36:47 -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.017 10:36:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.274 10:36:48 -- event/event.sh@35 -- # sleep 3 00:06:31.532 [2024-07-10 10:36:48.277410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.791 [2024-07-10 10:36:48.366035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.791 [2024-07-10 10:36:48.366041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.791 [2024-07-10 10:36:48.427157] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.791 [2024-07-10 10:36:48.427232] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
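(The data path exercised in every round is the same: fill a temp file with random data, push it through each Malloc-backed nbd device with direct I/O, and read-compare it back. A stripped-down sketch of what the trace above repeats, using the nbdrandtest file name from the log:

    # 1 MiB of random data, written through each exported device and verified with cmp
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$dev"
    done
    rm nbdrandtest
)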
00:06:34.317 10:36:51 -- event/event.sh@38 -- # waitforlisten 3338301 /var/tmp/spdk-nbd.sock 00:06:34.318 10:36:51 -- common/autotest_common.sh@819 -- # '[' -z 3338301 ']' 00:06:34.318 10:36:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.318 10:36:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.318 10:36:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.318 10:36:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.318 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:34.576 10:36:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.576 10:36:51 -- common/autotest_common.sh@852 -- # return 0 00:06:34.576 10:36:51 -- event/event.sh@39 -- # killprocess 3338301 00:06:34.576 10:36:51 -- common/autotest_common.sh@926 -- # '[' -z 3338301 ']' 00:06:34.576 10:36:51 -- common/autotest_common.sh@930 -- # kill -0 3338301 00:06:34.576 10:36:51 -- common/autotest_common.sh@931 -- # uname 00:06:34.576 10:36:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.576 10:36:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3338301 00:06:34.576 10:36:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.576 10:36:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.576 10:36:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3338301' 00:06:34.576 killing process with pid 3338301 00:06:34.576 10:36:51 -- common/autotest_common.sh@945 -- # kill 3338301 00:06:34.576 10:36:51 -- common/autotest_common.sh@950 -- # wait 3338301 00:06:34.834 spdk_app_start is called in Round 0. 00:06:34.834 Shutdown signal received, stop current app iteration 00:06:34.834 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:34.834 spdk_app_start is called in Round 1. 00:06:34.834 Shutdown signal received, stop current app iteration 00:06:34.834 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:34.834 spdk_app_start is called in Round 2. 00:06:34.834 Shutdown signal received, stop current app iteration 00:06:34.834 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:34.834 spdk_app_start is called in Round 3. 
00:06:34.834 Shutdown signal received, stop current app iteration 00:06:34.834 10:36:51 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:34.834 10:36:51 -- event/event.sh@42 -- # return 0 00:06:34.834 00:06:34.834 real 0m17.681s 00:06:34.834 user 0m38.338s 00:06:34.834 sys 0m3.215s 00:06:34.834 10:36:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.834 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:34.834 ************************************ 00:06:34.834 END TEST app_repeat 00:06:34.834 ************************************ 00:06:34.834 10:36:51 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:34.834 10:36:51 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:34.834 10:36:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.834 10:36:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.834 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:34.834 ************************************ 00:06:34.834 START TEST cpu_locks 00:06:34.834 ************************************ 00:06:34.834 10:36:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:34.834 * Looking for test storage... 00:06:34.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:34.834 10:36:51 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:34.834 10:36:51 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:34.834 10:36:51 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:34.834 10:36:51 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:34.834 10:36:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.834 10:36:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.834 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:34.834 ************************************ 00:06:34.834 START TEST default_locks 00:06:34.834 ************************************ 00:06:34.834 10:36:51 -- common/autotest_common.sh@1104 -- # default_locks 00:06:34.834 10:36:51 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3340710 00:06:34.834 10:36:51 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.834 10:36:51 -- event/cpu_locks.sh@47 -- # waitforlisten 3340710 00:06:34.834 10:36:51 -- common/autotest_common.sh@819 -- # '[' -z 3340710 ']' 00:06:34.834 10:36:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.834 10:36:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.834 10:36:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.834 10:36:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.834 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:34.834 [2024-07-10 10:36:51.638490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:34.834 [2024-07-10 10:36:51.638577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340710 ] 00:06:35.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.093 [2024-07-10 10:36:51.695714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.093 [2024-07-10 10:36:51.777224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.093 [2024-07-10 10:36:51.777386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.026 10:36:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.026 10:36:52 -- common/autotest_common.sh@852 -- # return 0 00:06:36.026 10:36:52 -- event/cpu_locks.sh@49 -- # locks_exist 3340710 00:06:36.026 10:36:52 -- event/cpu_locks.sh@22 -- # lslocks -p 3340710 00:06:36.026 10:36:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.285 lslocks: write error 00:06:36.285 10:36:52 -- event/cpu_locks.sh@50 -- # killprocess 3340710 00:06:36.285 10:36:52 -- common/autotest_common.sh@926 -- # '[' -z 3340710 ']' 00:06:36.285 10:36:52 -- common/autotest_common.sh@930 -- # kill -0 3340710 00:06:36.285 10:36:52 -- common/autotest_common.sh@931 -- # uname 00:06:36.285 10:36:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.285 10:36:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3340710 00:06:36.285 10:36:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.285 10:36:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.285 10:36:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3340710' 00:06:36.285 killing process with pid 3340710 00:06:36.285 10:36:52 -- common/autotest_common.sh@945 -- # kill 3340710 00:06:36.285 10:36:52 -- common/autotest_common.sh@950 -- # wait 3340710 00:06:36.543 10:36:53 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3340710 00:06:36.543 10:36:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.543 10:36:53 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3340710 00:06:36.543 10:36:53 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.543 10:36:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.543 10:36:53 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.543 10:36:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.543 10:36:53 -- common/autotest_common.sh@643 -- # waitforlisten 3340710 00:06:36.543 10:36:53 -- common/autotest_common.sh@819 -- # '[' -z 3340710 ']' 00:06:36.543 10:36:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.543 10:36:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.543 10:36:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
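Note on the locks_exist check in the default_locks run above: the helper simply lists the file locks held by the target PID and looks for the spdk_cpu_lock prefix; the "lslocks: write error" line is most likely lslocks complaining about its output pipe after grep -q exits early, not a test failure. A hedged approximation of the check (the exact helper body is assumed):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # non-zero exit means no core lock is held
    }
    locks_exist 3340710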
00:06:36.543 10:36:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.543 10:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:36.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3340710) - No such process 00:06:36.543 ERROR: process (pid: 3340710) is no longer running 00:06:36.543 10:36:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.543 10:36:53 -- common/autotest_common.sh@852 -- # return 1 00:06:36.543 10:36:53 -- common/autotest_common.sh@643 -- # es=1 00:06:36.543 10:36:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.543 10:36:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.543 10:36:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.543 10:36:53 -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.543 10:36:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.543 10:36:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.543 10:36:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.543 00:06:36.543 real 0m1.770s 00:06:36.543 user 0m1.870s 00:06:36.543 sys 0m0.580s 00:06:36.543 10:36:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.543 10:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:36.543 ************************************ 00:06:36.543 END TEST default_locks 00:06:36.543 ************************************ 00:06:36.801 10:36:53 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:36.801 10:36:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.801 10:36:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.801 10:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:36.801 ************************************ 00:06:36.801 START TEST default_locks_via_rpc 00:06:36.801 ************************************ 00:06:36.801 10:36:53 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:36.801 10:36:53 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3340883 00:06:36.801 10:36:53 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.801 10:36:53 -- event/cpu_locks.sh@63 -- # waitforlisten 3340883 00:06:36.801 10:36:53 -- common/autotest_common.sh@819 -- # '[' -z 3340883 ']' 00:06:36.801 10:36:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.801 10:36:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.801 10:36:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.801 10:36:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.801 10:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:36.801 [2024-07-10 10:36:53.438855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
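Note on the default_locks_via_rpc run that starts here: it differs from the previous test only in how the per-core lock is managed, dropping and re-taking it through the RPC interface before checking lslocks again. A hedged outline using the calls visible in the trace (rpc_cmd is the test wrapper around scripts/rpc.py; its exact behaviour is assumed):

    rpc_cmd framework_disable_cpumask_locks   # release the /var/tmp/spdk_cpu_lock_* files
    rpc_cmd framework_enable_cpumask_locks    # re-acquire them
    locks_exist "$spdk_tgt_pid"               # the lock file must be held again afterwards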
00:06:36.801 [2024-07-10 10:36:53.438965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340883 ] 00:06:36.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.801 [2024-07-10 10:36:53.500593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.801 [2024-07-10 10:36:53.587894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.801 [2024-07-10 10:36:53.588073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.733 10:36:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.733 10:36:54 -- common/autotest_common.sh@852 -- # return 0 00:06:37.733 10:36:54 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.733 10:36:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.733 10:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:37.733 10:36:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.733 10:36:54 -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.733 10:36:54 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.733 10:36:54 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.733 10:36:54 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.733 10:36:54 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.733 10:36:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.733 10:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:37.733 10:36:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.733 10:36:54 -- event/cpu_locks.sh@71 -- # locks_exist 3340883 00:06:37.733 10:36:54 -- event/cpu_locks.sh@22 -- # lslocks -p 3340883 00:06:37.733 10:36:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.992 10:36:54 -- event/cpu_locks.sh@73 -- # killprocess 3340883 00:06:37.992 10:36:54 -- common/autotest_common.sh@926 -- # '[' -z 3340883 ']' 00:06:37.992 10:36:54 -- common/autotest_common.sh@930 -- # kill -0 3340883 00:06:37.992 10:36:54 -- common/autotest_common.sh@931 -- # uname 00:06:37.992 10:36:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.992 10:36:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3340883 00:06:37.992 10:36:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.992 10:36:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.992 10:36:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3340883' 00:06:37.992 killing process with pid 3340883 00:06:37.992 10:36:54 -- common/autotest_common.sh@945 -- # kill 3340883 00:06:37.992 10:36:54 -- common/autotest_common.sh@950 -- # wait 3340883 00:06:38.250 00:06:38.250 real 0m1.642s 00:06:38.250 user 0m1.757s 00:06:38.250 sys 0m0.532s 00:06:38.250 10:36:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.250 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:38.250 ************************************ 00:06:38.250 END TEST default_locks_via_rpc 00:06:38.250 ************************************ 00:06:38.250 10:36:55 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.250 10:36:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.250 10:36:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.250 10:36:55 -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.250 ************************************ 00:06:38.250 START TEST non_locking_app_on_locked_coremask 00:06:38.250 ************************************ 00:06:38.250 10:36:55 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:38.250 10:36:55 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3341182 00:06:38.250 10:36:55 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.250 10:36:55 -- event/cpu_locks.sh@81 -- # waitforlisten 3341182 /var/tmp/spdk.sock 00:06:38.250 10:36:55 -- common/autotest_common.sh@819 -- # '[' -z 3341182 ']' 00:06:38.250 10:36:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.250 10:36:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.250 10:36:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.250 10:36:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.250 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:38.509 [2024-07-10 10:36:55.103678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:38.509 [2024-07-10 10:36:55.103773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341182 ] 00:06:38.509 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.509 [2024-07-10 10:36:55.159919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.509 [2024-07-10 10:36:55.247768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.509 [2024-07-10 10:36:55.247932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.445 10:36:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.445 10:36:56 -- common/autotest_common.sh@852 -- # return 0 00:06:39.445 10:36:56 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3341322 00:06:39.445 10:36:56 -- event/cpu_locks.sh@85 -- # waitforlisten 3341322 /var/tmp/spdk2.sock 00:06:39.445 10:36:56 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.445 10:36:56 -- common/autotest_common.sh@819 -- # '[' -z 3341322 ']' 00:06:39.445 10:36:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.445 10:36:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.445 10:36:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.445 10:36:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.445 10:36:56 -- common/autotest_common.sh@10 -- # set +x 00:06:39.445 [2024-07-10 10:36:56.116500] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
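Note on the non_locking_app_on_locked_coremask setup above: the first spdk_tgt is started with -m 0x1 and therefore owns the core-0 lock, while the second instance uses the same mask but adds --disable-cpumask-locks and its own RPC socket, so it comes up without trying to claim the core. Condensed view of the two launches (paths shortened; flags copied from the trace):

    spdk_tgt -m 0x1                                                  # takes /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # skips the lock, so both targets run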
00:06:39.445 [2024-07-10 10:36:56.116602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341322 ] 00:06:39.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.445 [2024-07-10 10:36:56.214718] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:39.445 [2024-07-10 10:36:56.214758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.703 [2024-07-10 10:36:56.393219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.703 [2024-07-10 10:36:56.393408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.269 10:36:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.269 10:36:57 -- common/autotest_common.sh@852 -- # return 0 00:06:40.269 10:36:57 -- event/cpu_locks.sh@87 -- # locks_exist 3341182 00:06:40.269 10:36:57 -- event/cpu_locks.sh@22 -- # lslocks -p 3341182 00:06:40.269 10:36:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.835 lslocks: write error 00:06:40.835 10:36:57 -- event/cpu_locks.sh@89 -- # killprocess 3341182 00:06:40.835 10:36:57 -- common/autotest_common.sh@926 -- # '[' -z 3341182 ']' 00:06:40.835 10:36:57 -- common/autotest_common.sh@930 -- # kill -0 3341182 00:06:40.835 10:36:57 -- common/autotest_common.sh@931 -- # uname 00:06:40.835 10:36:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.835 10:36:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3341182 00:06:40.835 10:36:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.835 10:36:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.835 10:36:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3341182' 00:06:40.835 killing process with pid 3341182 00:06:40.835 10:36:57 -- common/autotest_common.sh@945 -- # kill 3341182 00:06:40.835 10:36:57 -- common/autotest_common.sh@950 -- # wait 3341182 00:06:41.768 10:36:58 -- event/cpu_locks.sh@90 -- # killprocess 3341322 00:06:41.768 10:36:58 -- common/autotest_common.sh@926 -- # '[' -z 3341322 ']' 00:06:41.768 10:36:58 -- common/autotest_common.sh@930 -- # kill -0 3341322 00:06:41.768 10:36:58 -- common/autotest_common.sh@931 -- # uname 00:06:41.768 10:36:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.768 10:36:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3341322 00:06:41.768 10:36:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.768 10:36:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.768 10:36:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3341322' 00:06:41.768 killing process with pid 3341322 00:06:41.768 10:36:58 -- common/autotest_common.sh@945 -- # kill 3341322 00:06:41.768 10:36:58 -- common/autotest_common.sh@950 -- # wait 3341322 00:06:42.025 00:06:42.025 real 0m3.704s 00:06:42.025 user 0m4.047s 00:06:42.025 sys 0m1.082s 00:06:42.025 10:36:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.025 10:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.025 ************************************ 00:06:42.025 END TEST non_locking_app_on_locked_coremask 00:06:42.025 ************************************ 00:06:42.025 10:36:58 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:42.025 10:36:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.025 10:36:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.025 10:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.025 ************************************ 00:06:42.025 START TEST locking_app_on_unlocked_coremask 00:06:42.025 ************************************ 00:06:42.025 10:36:58 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:42.025 10:36:58 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3341628 00:06:42.025 10:36:58 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.025 10:36:58 -- event/cpu_locks.sh@99 -- # waitforlisten 3341628 /var/tmp/spdk.sock 00:06:42.025 10:36:58 -- common/autotest_common.sh@819 -- # '[' -z 3341628 ']' 00:06:42.025 10:36:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.025 10:36:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.025 10:36:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.025 10:36:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.025 10:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.025 [2024-07-10 10:36:58.838322] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:42.025 [2024-07-10 10:36:58.838417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341628 ] 00:06:42.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.283 [2024-07-10 10:36:58.895856] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.283 [2024-07-10 10:36:58.895893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.283 [2024-07-10 10:36:58.982680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.283 [2024-07-10 10:36:58.982841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.215 10:36:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.215 10:36:59 -- common/autotest_common.sh@852 -- # return 0 00:06:43.215 10:36:59 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3341770 00:06:43.215 10:36:59 -- event/cpu_locks.sh@103 -- # waitforlisten 3341770 /var/tmp/spdk2.sock 00:06:43.215 10:36:59 -- common/autotest_common.sh@819 -- # '[' -z 3341770 ']' 00:06:43.215 10:36:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.215 10:36:59 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.215 10:36:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.215 10:36:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:43.215 10:36:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.215 10:36:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.215 [2024-07-10 10:36:59.807763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:43.215 [2024-07-10 10:36:59.807858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3341770 ] 00:06:43.215 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.215 [2024-07-10 10:36:59.905163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.471 [2024-07-10 10:37:00.097788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.471 [2024-07-10 10:37:00.097971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.034 10:37:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.034 10:37:00 -- common/autotest_common.sh@852 -- # return 0 00:06:44.034 10:37:00 -- event/cpu_locks.sh@105 -- # locks_exist 3341770 00:06:44.034 10:37:00 -- event/cpu_locks.sh@22 -- # lslocks -p 3341770 00:06:44.034 10:37:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.597 lslocks: write error 00:06:44.597 10:37:01 -- event/cpu_locks.sh@107 -- # killprocess 3341628 00:06:44.597 10:37:01 -- common/autotest_common.sh@926 -- # '[' -z 3341628 ']' 00:06:44.597 10:37:01 -- common/autotest_common.sh@930 -- # kill -0 3341628 00:06:44.597 10:37:01 -- common/autotest_common.sh@931 -- # uname 00:06:44.597 10:37:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.597 10:37:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3341628 00:06:44.597 10:37:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.597 10:37:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.597 10:37:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3341628' 00:06:44.597 killing process with pid 3341628 00:06:44.597 10:37:01 -- common/autotest_common.sh@945 -- # kill 3341628 00:06:44.597 10:37:01 -- common/autotest_common.sh@950 -- # wait 3341628 00:06:45.531 10:37:02 -- event/cpu_locks.sh@108 -- # killprocess 3341770 00:06:45.531 10:37:02 -- common/autotest_common.sh@926 -- # '[' -z 3341770 ']' 00:06:45.531 10:37:02 -- common/autotest_common.sh@930 -- # kill -0 3341770 00:06:45.531 10:37:02 -- common/autotest_common.sh@931 -- # uname 00:06:45.531 10:37:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.531 10:37:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3341770 00:06:45.531 10:37:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.531 10:37:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.531 10:37:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3341770' 00:06:45.531 killing process with pid 3341770 00:06:45.531 10:37:02 -- common/autotest_common.sh@945 -- # kill 3341770 00:06:45.531 10:37:02 -- common/autotest_common.sh@950 -- # wait 3341770 00:06:45.789 00:06:45.789 real 0m3.743s 00:06:45.789 user 0m4.026s 00:06:45.789 sys 0m1.112s 00:06:45.789 10:37:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.789 10:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.789 ************************************ 00:06:45.789 END TEST locking_app_on_unlocked_coremask 
00:06:45.789 ************************************ 00:06:45.789 10:37:02 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:45.789 10:37:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.789 10:37:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.789 10:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.789 ************************************ 00:06:45.789 START TEST locking_app_on_locked_coremask 00:06:45.789 ************************************ 00:06:45.789 10:37:02 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:45.789 10:37:02 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3342082 00:06:45.789 10:37:02 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.789 10:37:02 -- event/cpu_locks.sh@116 -- # waitforlisten 3342082 /var/tmp/spdk.sock 00:06:45.789 10:37:02 -- common/autotest_common.sh@819 -- # '[' -z 3342082 ']' 00:06:45.789 10:37:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.789 10:37:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.789 10:37:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.789 10:37:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.789 10:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:45.789 [2024-07-10 10:37:02.611997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:45.789 [2024-07-10 10:37:02.612098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342082 ] 00:06:46.047 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.047 [2024-07-10 10:37:02.676085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.047 [2024-07-10 10:37:02.770896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.047 [2024-07-10 10:37:02.771079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.990 10:37:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.990 10:37:03 -- common/autotest_common.sh@852 -- # return 0 00:06:46.990 10:37:03 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3342222 00:06:46.990 10:37:03 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.990 10:37:03 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3342222 /var/tmp/spdk2.sock 00:06:46.990 10:37:03 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.990 10:37:03 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3342222 /var/tmp/spdk2.sock 00:06:46.990 10:37:03 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:46.990 10:37:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.990 10:37:03 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:46.990 10:37:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.990 10:37:03 -- common/autotest_common.sh@643 -- # waitforlisten 3342222 /var/tmp/spdk2.sock 00:06:46.990 10:37:03 -- common/autotest_common.sh@819 -- 
# '[' -z 3342222 ']' 00:06:46.990 10:37:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.990 10:37:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.990 10:37:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.990 10:37:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.990 10:37:03 -- common/autotest_common.sh@10 -- # set +x 00:06:46.990 [2024-07-10 10:37:03.582680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:46.990 [2024-07-10 10:37:03.582775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342222 ] 00:06:46.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.990 [2024-07-10 10:37:03.679060] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3342082 has claimed it. 00:06:46.990 [2024-07-10 10:37:03.679125] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3342222) - No such process 00:06:47.555 ERROR: process (pid: 3342222) is no longer running 00:06:47.555 10:37:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.555 10:37:04 -- common/autotest_common.sh@852 -- # return 1 00:06:47.555 10:37:04 -- common/autotest_common.sh@643 -- # es=1 00:06:47.555 10:37:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.555 10:37:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.555 10:37:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.555 10:37:04 -- event/cpu_locks.sh@122 -- # locks_exist 3342082 00:06:47.555 10:37:04 -- event/cpu_locks.sh@22 -- # lslocks -p 3342082 00:06:47.555 10:37:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.120 lslocks: write error 00:06:48.120 10:37:04 -- event/cpu_locks.sh@124 -- # killprocess 3342082 00:06:48.120 10:37:04 -- common/autotest_common.sh@926 -- # '[' -z 3342082 ']' 00:06:48.120 10:37:04 -- common/autotest_common.sh@930 -- # kill -0 3342082 00:06:48.120 10:37:04 -- common/autotest_common.sh@931 -- # uname 00:06:48.120 10:37:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:48.120 10:37:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3342082 00:06:48.120 10:37:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:48.120 10:37:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:48.120 10:37:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3342082' 00:06:48.120 killing process with pid 3342082 00:06:48.120 10:37:04 -- common/autotest_common.sh@945 -- # kill 3342082 00:06:48.120 10:37:04 -- common/autotest_common.sh@950 -- # wait 3342082 00:06:48.685 00:06:48.685 real 0m2.663s 00:06:48.685 user 0m3.010s 00:06:48.685 sys 0m0.727s 00:06:48.685 10:37:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.685 10:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:48.685 ************************************ 00:06:48.685 END TEST locking_app_on_locked_coremask 00:06:48.685 ************************************ 00:06:48.685 
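Note on the locking_app_on_locked_coremask result above: this is the negative counterpart of the earlier tests. The second spdk_tgt is launched on the already-locked core without --disable-cpumask-locks, claim_cpu_cores reports "Cannot create lock on core 0, probably process 3342082 has claimed it", and the target exits before listening on /var/tmp/spdk2.sock, so the NOT wrapper expects waitforlisten to fail; the "No such process" kill message and "return 1" therefore count as a pass. A simplified stand-in for the wrapper (the real NOT() in autotest_common.sh does more bookkeeping):

    NOT() { ! "$@"; }                               # succeed only if the wrapped command fails
    NOT waitforlisten 3342222 /var/tmp/spdk2.sock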
10:37:05 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.685 10:37:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.685 10:37:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.685 10:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:48.685 ************************************ 00:06:48.685 START TEST locking_overlapped_coremask 00:06:48.685 ************************************ 00:06:48.685 10:37:05 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:48.685 10:37:05 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3342520 00:06:48.685 10:37:05 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.685 10:37:05 -- event/cpu_locks.sh@133 -- # waitforlisten 3342520 /var/tmp/spdk.sock 00:06:48.685 10:37:05 -- common/autotest_common.sh@819 -- # '[' -z 3342520 ']' 00:06:48.685 10:37:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.685 10:37:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.685 10:37:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.685 10:37:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.685 10:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:48.685 [2024-07-10 10:37:05.298123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:48.685 [2024-07-10 10:37:05.298202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342520 ] 00:06:48.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.685 [2024-07-10 10:37:05.363023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.685 [2024-07-10 10:37:05.452241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.685 [2024-07-10 10:37:05.452490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.685 [2024-07-10 10:37:05.452545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.685 [2024-07-10 10:37:05.452548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.618 10:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.619 10:37:06 -- common/autotest_common.sh@852 -- # return 0 00:06:49.619 10:37:06 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3342660 00:06:49.619 10:37:06 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3342660 /var/tmp/spdk2.sock 00:06:49.619 10:37:06 -- common/autotest_common.sh@640 -- # local es=0 00:06:49.619 10:37:06 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:49.619 10:37:06 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3342660 /var/tmp/spdk2.sock 00:06:49.619 10:37:06 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:49.619 10:37:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:49.619 10:37:06 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:49.619 10:37:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:49.619 10:37:06 
-- common/autotest_common.sh@643 -- # waitforlisten 3342660 /var/tmp/spdk2.sock 00:06:49.619 10:37:06 -- common/autotest_common.sh@819 -- # '[' -z 3342660 ']' 00:06:49.619 10:37:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.619 10:37:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.619 10:37:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.619 10:37:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.619 10:37:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.619 [2024-07-10 10:37:06.276034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:49.619 [2024-07-10 10:37:06.276117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342660 ] 00:06:49.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.619 [2024-07-10 10:37:06.363823] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3342520 has claimed it. 00:06:49.619 [2024-07-10 10:37:06.363879] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3342660) - No such process 00:06:50.184 ERROR: process (pid: 3342660) is no longer running 00:06:50.184 10:37:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.184 10:37:06 -- common/autotest_common.sh@852 -- # return 1 00:06:50.184 10:37:06 -- common/autotest_common.sh@643 -- # es=1 00:06:50.184 10:37:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:50.184 10:37:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:50.184 10:37:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:50.184 10:37:06 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.184 10:37:06 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.184 10:37:06 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.184 10:37:06 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.184 10:37:06 -- event/cpu_locks.sh@141 -- # killprocess 3342520 00:06:50.184 10:37:06 -- common/autotest_common.sh@926 -- # '[' -z 3342520 ']' 00:06:50.184 10:37:06 -- common/autotest_common.sh@930 -- # kill -0 3342520 00:06:50.184 10:37:06 -- common/autotest_common.sh@931 -- # uname 00:06:50.184 10:37:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:50.184 10:37:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3342520 00:06:50.184 10:37:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:50.184 10:37:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:50.184 10:37:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3342520' 00:06:50.184 killing process with pid 3342520 00:06:50.184 10:37:06 -- common/autotest_common.sh@945 -- # kill 3342520 00:06:50.184 10:37:06 
-- common/autotest_common.sh@950 -- # wait 3342520 00:06:50.750 00:06:50.750 real 0m2.125s 00:06:50.750 user 0m6.096s 00:06:50.750 sys 0m0.460s 00:06:50.750 10:37:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.750 10:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:50.750 ************************************ 00:06:50.750 END TEST locking_overlapped_coremask 00:06:50.750 ************************************ 00:06:50.750 10:37:07 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.750 10:37:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.750 10:37:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.750 10:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:50.750 ************************************ 00:06:50.750 START TEST locking_overlapped_coremask_via_rpc 00:06:50.750 ************************************ 00:06:50.750 10:37:07 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:50.750 10:37:07 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3342824 00:06:50.750 10:37:07 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.750 10:37:07 -- event/cpu_locks.sh@149 -- # waitforlisten 3342824 /var/tmp/spdk.sock 00:06:50.750 10:37:07 -- common/autotest_common.sh@819 -- # '[' -z 3342824 ']' 00:06:50.750 10:37:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.750 10:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.750 10:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.750 10:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.750 10:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:50.750 [2024-07-10 10:37:07.453134] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:50.751 [2024-07-10 10:37:07.453245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342824 ] 00:06:50.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.751 [2024-07-10 10:37:07.511667] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
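Note on the overlapped-coremask conflict above: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so core 2 is in both masks and the second target hits "Cannot create lock on core 2" as expected. Mask arithmetic for reference:

    0x07 = 0b00111        -> cores 0,1,2
    0x1c = 0b11100        -> cores 2,3,4
    0x07 & 0x1c = 0x04    -> the contested core 2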
00:06:50.751 [2024-07-10 10:37:07.511722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.008 [2024-07-10 10:37:07.603174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.008 [2024-07-10 10:37:07.603365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.008 [2024-07-10 10:37:07.603457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.008 [2024-07-10 10:37:07.603453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.573 10:37:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.573 10:37:08 -- common/autotest_common.sh@852 -- # return 0 00:06:51.573 10:37:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3342967 00:06:51.573 10:37:08 -- event/cpu_locks.sh@153 -- # waitforlisten 3342967 /var/tmp/spdk2.sock 00:06:51.573 10:37:08 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.574 10:37:08 -- common/autotest_common.sh@819 -- # '[' -z 3342967 ']' 00:06:51.574 10:37:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.574 10:37:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.574 10:37:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.574 10:37:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.574 10:37:08 -- common/autotest_common.sh@10 -- # set +x 00:06:51.831 [2024-07-10 10:37:08.431050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:51.831 [2024-07-10 10:37:08.431147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3342967 ] 00:06:51.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.831 [2024-07-10 10:37:08.519042] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.831 [2024-07-10 10:37:08.519082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.089 [2024-07-10 10:37:08.688585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.089 [2024-07-10 10:37:08.688777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.089 [2024-07-10 10:37:08.692527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.089 [2024-07-10 10:37:08.692530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.654 10:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.654 10:37:09 -- common/autotest_common.sh@852 -- # return 0 00:06:52.654 10:37:09 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.654 10:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.654 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.654 10:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.654 10:37:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.654 10:37:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:52.654 10:37:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.654 10:37:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:52.654 10:37:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.654 10:37:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:52.654 10:37:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.654 10:37:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.654 10:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.654 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.654 [2024-07-10 10:37:09.351532] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3342824 has claimed it. 00:06:52.654 request: 00:06:52.654 { 00:06:52.654 "method": "framework_enable_cpumask_locks", 00:06:52.654 "req_id": 1 00:06:52.654 } 00:06:52.654 Got JSON-RPC error response 00:06:52.654 response: 00:06:52.654 { 00:06:52.654 "code": -32603, 00:06:52.654 "message": "Failed to claim CPU core: 2" 00:06:52.654 } 00:06:52.654 10:37:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:52.654 10:37:09 -- common/autotest_common.sh@643 -- # es=1 00:06:52.655 10:37:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.655 10:37:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:52.655 10:37:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.655 10:37:09 -- event/cpu_locks.sh@158 -- # waitforlisten 3342824 /var/tmp/spdk.sock 00:06:52.655 10:37:09 -- common/autotest_common.sh@819 -- # '[' -z 3342824 ']' 00:06:52.655 10:37:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.655 10:37:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.655 10:37:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.655 10:37:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.655 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.912 10:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.912 10:37:09 -- common/autotest_common.sh@852 -- # return 0 00:06:52.912 10:37:09 -- event/cpu_locks.sh@159 -- # waitforlisten 3342967 /var/tmp/spdk2.sock 00:06:52.912 10:37:09 -- common/autotest_common.sh@819 -- # '[' -z 3342967 ']' 00:06:52.912 10:37:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.912 10:37:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.912 10:37:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.912 10:37:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.912 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.171 10:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.171 10:37:09 -- common/autotest_common.sh@852 -- # return 0 00:06:53.171 10:37:09 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.171 10:37:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.171 10:37:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.171 10:37:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.171 00:06:53.171 real 0m2.447s 00:06:53.171 user 0m1.165s 00:06:53.171 sys 0m0.216s 00:06:53.171 10:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.171 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.171 ************************************ 00:06:53.171 END TEST locking_overlapped_coremask_via_rpc 00:06:53.171 ************************************ 00:06:53.171 10:37:09 -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.171 10:37:09 -- event/cpu_locks.sh@15 -- # [[ -z 3342824 ]] 00:06:53.171 10:37:09 -- event/cpu_locks.sh@15 -- # killprocess 3342824 00:06:53.171 10:37:09 -- common/autotest_common.sh@926 -- # '[' -z 3342824 ']' 00:06:53.171 10:37:09 -- common/autotest_common.sh@930 -- # kill -0 3342824 00:06:53.171 10:37:09 -- common/autotest_common.sh@931 -- # uname 00:06:53.171 10:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.171 10:37:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3342824 00:06:53.171 10:37:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.171 10:37:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.171 10:37:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3342824' 00:06:53.171 killing process with pid 3342824 00:06:53.171 10:37:09 -- common/autotest_common.sh@945 -- # kill 3342824 00:06:53.171 10:37:09 -- common/autotest_common.sh@950 -- # wait 3342824 00:06:53.737 10:37:10 -- event/cpu_locks.sh@16 -- # [[ -z 3342967 ]] 00:06:53.737 10:37:10 -- event/cpu_locks.sh@16 -- # killprocess 3342967 00:06:53.737 10:37:10 -- common/autotest_common.sh@926 -- # '[' -z 3342967 ']' 00:06:53.737 10:37:10 -- common/autotest_common.sh@930 -- # kill -0 3342967 00:06:53.737 10:37:10 -- common/autotest_common.sh@931 -- # uname 
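Note on the via-RPC variant above: the same core-2 conflict surfaces through the RPC layer, with framework_enable_cpumask_locks on the second target returning JSON-RPC error -32603 "Failed to claim CPU core: 2", while the first target holds its locks and check_remaining_locks confirms exactly /var/tmp/spdk_cpu_lock_000 through _002 exist. A hedged sketch of that final check, following the glob-versus-brace-expansion comparison shown in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]   # any extra or missing lock file fails the test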
00:06:53.737 10:37:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.737 10:37:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3342967 00:06:53.737 10:37:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:53.737 10:37:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:53.737 10:37:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3342967' 00:06:53.737 killing process with pid 3342967 00:06:53.737 10:37:10 -- common/autotest_common.sh@945 -- # kill 3342967 00:06:53.737 10:37:10 -- common/autotest_common.sh@950 -- # wait 3342967 00:06:53.995 10:37:10 -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.995 10:37:10 -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.995 10:37:10 -- event/cpu_locks.sh@15 -- # [[ -z 3342824 ]] 00:06:53.995 10:37:10 -- event/cpu_locks.sh@15 -- # killprocess 3342824 00:06:53.995 10:37:10 -- common/autotest_common.sh@926 -- # '[' -z 3342824 ']' 00:06:53.995 10:37:10 -- common/autotest_common.sh@930 -- # kill -0 3342824 00:06:53.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3342824) - No such process 00:06:53.995 10:37:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3342824 is not found' 00:06:53.995 Process with pid 3342824 is not found 00:06:53.995 10:37:10 -- event/cpu_locks.sh@16 -- # [[ -z 3342967 ]] 00:06:53.995 10:37:10 -- event/cpu_locks.sh@16 -- # killprocess 3342967 00:06:53.995 10:37:10 -- common/autotest_common.sh@926 -- # '[' -z 3342967 ']' 00:06:53.995 10:37:10 -- common/autotest_common.sh@930 -- # kill -0 3342967 00:06:53.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3342967) - No such process 00:06:53.995 10:37:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3342967 is not found' 00:06:53.995 Process with pid 3342967 is not found 00:06:53.995 10:37:10 -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.995 00:06:53.995 real 0m19.194s 00:06:53.995 user 0m34.016s 00:06:53.995 sys 0m5.531s 00:06:53.995 10:37:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.995 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.995 ************************************ 00:06:53.995 END TEST cpu_locks 00:06:53.995 ************************************ 00:06:53.995 00:06:53.995 real 0m44.614s 00:06:53.995 user 1m24.670s 00:06:53.995 sys 0m9.453s 00:06:53.995 10:37:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.995 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.995 ************************************ 00:06:53.995 END TEST event 00:06:53.995 ************************************ 00:06:53.995 10:37:10 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.995 10:37:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.995 10:37:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.995 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:53.995 ************************************ 00:06:53.995 START TEST thread 00:06:53.995 ************************************ 00:06:53.995 10:37:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:54.253 * Looking for test storage... 
00:06:54.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:54.253 10:37:10 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.253 10:37:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:54.253 10:37:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.253 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.253 ************************************ 00:06:54.253 START TEST thread_poller_perf 00:06:54.253 ************************************ 00:06:54.253 10:37:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.253 [2024-07-10 10:37:10.844021] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:54.253 [2024-07-10 10:37:10.844110] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343334 ] 00:06:54.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.253 [2024-07-10 10:37:10.906857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.253 [2024-07-10 10:37:11.001406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.253 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:55.626 ====================================== 00:06:55.626 busy:2719329718 (cyc) 00:06:55.626 total_run_count: 281000 00:06:55.626 tsc_hz: 2700000000 (cyc) 00:06:55.626 ====================================== 00:06:55.626 poller_cost: 9677 (cyc), 3584 (nsec) 00:06:55.626 00:06:55.626 real 0m1.263s 00:06:55.626 user 0m1.164s 00:06:55.626 sys 0m0.092s 00:06:55.626 10:37:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.626 10:37:12 -- common/autotest_common.sh@10 -- # set +x 00:06:55.626 ************************************ 00:06:55.626 END TEST thread_poller_perf 00:06:55.626 ************************************ 00:06:55.626 10:37:12 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.626 10:37:12 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:55.626 10:37:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.626 10:37:12 -- common/autotest_common.sh@10 -- # set +x 00:06:55.626 ************************************ 00:06:55.626 START TEST thread_poller_perf 00:06:55.626 ************************************ 00:06:55.626 10:37:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.626 [2024-07-10 10:37:12.131792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
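Note on the poller_cost figures printed above: they are just the busy cycle count divided by the run count, converted to nanoseconds with the reported TSC rate. Spelled out for this first (-l 1) run:

    2719329718 cyc / 281000 runs ≈ 9677 cyc per poller iteration
    9677 cyc / 2.7 GHz           ≈ 3584 nsec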
00:06:55.626 [2024-07-10 10:37:12.131872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343497 ] 00:06:55.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.626 [2024-07-10 10:37:12.195886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.626 [2024-07-10 10:37:12.285435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.626 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.560 ====================================== 00:06:56.560 busy:2703805241 (cyc) 00:06:56.560 total_run_count: 3884000 00:06:56.560 tsc_hz: 2700000000 (cyc) 00:06:56.560 ====================================== 00:06:56.560 poller_cost: 696 (cyc), 257 (nsec) 00:06:56.560 00:06:56.560 real 0m1.244s 00:06:56.560 user 0m1.154s 00:06:56.560 sys 0m0.082s 00:06:56.560 10:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.560 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.560 ************************************ 00:06:56.560 END TEST thread_poller_perf 00:06:56.560 ************************************ 00:06:56.818 10:37:13 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.818 00:06:56.818 real 0m2.610s 00:06:56.818 user 0m2.359s 00:06:56.818 sys 0m0.251s 00:06:56.818 10:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.818 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.818 ************************************ 00:06:56.818 END TEST thread 00:06:56.818 ************************************ 00:06:56.818 10:37:13 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.818 10:37:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.818 10:37:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.818 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.818 ************************************ 00:06:56.818 START TEST accel 00:06:56.818 ************************************ 00:06:56.818 10:37:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.818 * Looking for test storage... 00:06:56.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:56.818 10:37:13 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:56.818 10:37:13 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:56.818 10:37:13 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.818 10:37:13 -- accel/accel.sh@59 -- # spdk_tgt_pid=3343692 00:06:56.818 10:37:13 -- accel/accel.sh@60 -- # waitforlisten 3343692 00:06:56.818 10:37:13 -- common/autotest_common.sh@819 -- # '[' -z 3343692 ']' 00:06:56.818 10:37:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.818 10:37:13 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:56.818 10:37:13 -- accel/accel.sh@58 -- # build_accel_config 00:06:56.818 10:37:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.818 10:37:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.818 10:37:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:56.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.819 10:37:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.819 10:37:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.819 10:37:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.819 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:56.819 10:37:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.819 10:37:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.819 10:37:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.819 10:37:13 -- accel/accel.sh@42 -- # jq -r . 00:06:56.819 [2024-07-10 10:37:13.510317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:56.819 [2024-07-10 10:37:13.510390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343692 ] 00:06:56.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.819 [2024-07-10 10:37:13.569582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.077 [2024-07-10 10:37:13.657815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:57.077 [2024-07-10 10:37:13.657986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.643 10:37:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:57.643 10:37:14 -- common/autotest_common.sh@852 -- # return 0 00:06:57.643 10:37:14 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:57.643 10:37:14 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:57.643 10:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.643 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 10:37:14 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:57.643 10:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.643 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.643 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.643 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.644 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.644 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.644 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.644 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.644 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.644 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.902 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.902 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.902 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.902 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.902 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.902 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.902 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.902 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.902 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.902 10:37:14 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # IFS== 00:06:57.902 10:37:14 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.902 10:37:14 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.902 10:37:14 -- accel/accel.sh@67 -- # killprocess 3343692 00:06:57.902 10:37:14 -- common/autotest_common.sh@926 -- # '[' -z 3343692 ']' 00:06:57.902 10:37:14 -- common/autotest_common.sh@930 -- # kill -0 3343692 00:06:57.902 10:37:14 -- common/autotest_common.sh@931 -- # uname 00:06:57.902 10:37:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:57.902 10:37:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3343692 00:06:57.902 10:37:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:57.902 10:37:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:57.902 10:37:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3343692' 00:06:57.902 killing process with pid 3343692 00:06:57.902 10:37:14 -- common/autotest_common.sh@945 -- # kill 3343692 00:06:57.902 10:37:14 -- common/autotest_common.sh@950 -- # wait 3343692 00:06:58.160 10:37:14 -- accel/accel.sh@68 -- # trap - ERR 00:06:58.161 10:37:14 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:58.161 10:37:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:58.161 10:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.161 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.161 10:37:14 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:58.161 10:37:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:58.161 10:37:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.161 10:37:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.161 10:37:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.161 10:37:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.161 10:37:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.161 10:37:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.161 10:37:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.161 10:37:14 -- accel/accel.sh@42 -- # jq -r . 
00:06:58.161 10:37:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.161 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.161 10:37:14 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:58.161 10:37:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.161 10:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.161 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.161 ************************************ 00:06:58.161 START TEST accel_missing_filename 00:06:58.161 ************************************ 00:06:58.161 10:37:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:58.161 10:37:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.161 10:37:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:58.161 10:37:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.161 10:37:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.161 10:37:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:58.161 10:37:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.161 10:37:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:58.161 10:37:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:58.161 10:37:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.161 10:37:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.161 10:37:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.161 10:37:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.161 10:37:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.161 10:37:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.161 10:37:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.161 10:37:14 -- accel/accel.sh@42 -- # jq -r . 00:06:58.161 [2024-07-10 10:37:14.976504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:58.161 [2024-07-10 10:37:14.976580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343864 ] 00:06:58.419 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.419 [2024-07-10 10:37:15.038037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.419 [2024-07-10 10:37:15.128963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.419 [2024-07-10 10:37:15.188527] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.677 [2024-07-10 10:37:15.265119] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:58.677 A filename is required. 
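Note on the failure just captured: "A filename is required." is the expected outcome of accel_missing_filename, which requests a compress workload without the -l option that (per the usage text printed further down in this log) names the uncompressed input file. A simplified reproduction sketch, run from the SPDK checkout and without the -c config descriptor the harness passes (illustrative only):

# Expected to fail exactly as above: compress needs an input file via -l.
./build/examples/accel_perf -t 1 -w compress
# The accel_compress_verify test that follows adds -l and -y; it is also expected to fail,
# because the compress workload does not support the verify option.
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y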
00:06:58.677 10:37:15 -- common/autotest_common.sh@643 -- # es=234 00:06:58.677 10:37:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.677 10:37:15 -- common/autotest_common.sh@652 -- # es=106 00:06:58.677 10:37:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.677 10:37:15 -- common/autotest_common.sh@660 -- # es=1 00:06:58.677 10:37:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.677 00:06:58.677 real 0m0.386s 00:06:58.677 user 0m0.275s 00:06:58.677 sys 0m0.145s 00:06:58.677 10:37:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.677 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:58.677 ************************************ 00:06:58.677 END TEST accel_missing_filename 00:06:58.677 ************************************ 00:06:58.677 10:37:15 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.677 10:37:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:58.677 10:37:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.677 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:58.677 ************************************ 00:06:58.677 START TEST accel_compress_verify 00:06:58.677 ************************************ 00:06:58.677 10:37:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.677 10:37:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.677 10:37:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.677 10:37:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.677 10:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.677 10:37:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:58.677 10:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.677 10:37:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.677 10:37:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.677 10:37:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.677 10:37:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.677 10:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.677 10:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.677 10:37:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.677 10:37:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.677 10:37:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.677 10:37:15 -- accel/accel.sh@42 -- # jq -r . 00:06:58.677 [2024-07-10 10:37:15.384000] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:58.677 [2024-07-10 10:37:15.384066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344013 ] 00:06:58.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.677 [2024-07-10 10:37:15.443936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.958 [2024-07-10 10:37:15.536327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.958 [2024-07-10 10:37:15.595142] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.958 [2024-07-10 10:37:15.677272] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:58.958 00:06:58.958 Compression does not support the verify option, aborting. 00:06:58.958 10:37:15 -- common/autotest_common.sh@643 -- # es=161 00:06:58.958 10:37:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.958 10:37:15 -- common/autotest_common.sh@652 -- # es=33 00:06:58.958 10:37:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.958 10:37:15 -- common/autotest_common.sh@660 -- # es=1 00:06:58.958 10:37:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.958 00:06:58.958 real 0m0.391s 00:06:58.958 user 0m0.281s 00:06:58.958 sys 0m0.140s 00:06:58.958 10:37:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.958 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:58.958 ************************************ 00:06:58.958 END TEST accel_compress_verify 00:06:58.958 ************************************ 00:06:59.239 10:37:15 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:59.239 10:37:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:59.239 10:37:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.239 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.239 ************************************ 00:06:59.239 START TEST accel_wrong_workload 00:06:59.239 ************************************ 00:06:59.239 10:37:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:59.239 10:37:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.239 10:37:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:59.239 10:37:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:59.239 10:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.239 10:37:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:59.239 10:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.239 10:37:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:59.239 10:37:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:59.239 10:37:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.239 10:37:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.239 10:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.239 10:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.239 10:37:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.239 10:37:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.239 10:37:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.239 10:37:15 -- accel/accel.sh@42 -- # jq -r . 
00:06:59.239 Unsupported workload type: foobar 00:06:59.239 [2024-07-10 10:37:15.805208] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:59.239 accel_perf options: 00:06:59.239 [-h help message] 00:06:59.239 [-q queue depth per core] 00:06:59.239 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.239 [-T number of threads per core 00:06:59.239 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.239 [-t time in seconds] 00:06:59.239 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.239 [ dif_verify, , dif_generate, dif_generate_copy 00:06:59.239 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.239 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.239 [-S for crc32c workload, use this seed value (default 0) 00:06:59.239 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.239 [-f for fill workload, use this BYTE value (default 255) 00:06:59.239 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.239 [-y verify result if this switch is on] 00:06:59.239 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.239 Can be used to spread operations across a wider range of memory. 00:06:59.239 10:37:15 -- common/autotest_common.sh@643 -- # es=1 00:06:59.239 10:37:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.239 10:37:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:59.239 10:37:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.239 00:06:59.239 real 0m0.022s 00:06:59.239 user 0m0.011s 00:06:59.239 sys 0m0.012s 00:06:59.239 10:37:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.239 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.239 ************************************ 00:06:59.239 END TEST accel_wrong_workload 00:06:59.239 ************************************ 00:06:59.239 Error: writing output failed: Broken pipe 00:06:59.239 10:37:15 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.239 10:37:15 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:59.239 10:37:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.239 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.239 ************************************ 00:06:59.239 START TEST accel_negative_buffers 00:06:59.239 ************************************ 00:06:59.239 10:37:15 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.239 10:37:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.239 10:37:15 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:59.240 10:37:15 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:59.240 10:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.240 10:37:15 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:59.240 10:37:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.240 10:37:15 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:59.240 10:37:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:59.240 10:37:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.240 10:37:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.240 10:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.240 10:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.240 10:37:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.240 10:37:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.240 10:37:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.240 10:37:15 -- accel/accel.sh@42 -- # jq -r . 00:06:59.240 -x option must be non-negative. 00:06:59.240 [2024-07-10 10:37:15.847200] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:59.240 accel_perf options: 00:06:59.240 [-h help message] 00:06:59.240 [-q queue depth per core] 00:06:59.240 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.240 [-T number of threads per core 00:06:59.240 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.240 [-t time in seconds] 00:06:59.240 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.240 [ dif_verify, , dif_generate, dif_generate_copy 00:06:59.240 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.240 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.240 [-S for crc32c workload, use this seed value (default 0) 00:06:59.240 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.240 [-f for fill workload, use this BYTE value (default 255) 00:06:59.240 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.240 [-y verify result if this switch is on] 00:06:59.240 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.240 Can be used to spread operations across a wider range of memory. 
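Note on the usage dump just captured: accel_negative_buffers deliberately passes -x -1, and accel_perf rejects it because -x (the number of xor source buffers) must be non-negative, with a minimum of 2. A simplified sketch of the rejected versus an accepted invocation, run from the SPDK checkout (illustrative only):

# Rejected, as above: a negative source-buffer count is invalid.
./build/examples/accel_perf -t 1 -w xor -y -x -1
# A valid xor run uses two or more source buffers, for example:
./build/examples/accel_perf -t 1 -w xor -y -x 3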
00:06:59.240 10:37:15 -- common/autotest_common.sh@643 -- # es=1 00:06:59.240 10:37:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.240 10:37:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:59.240 10:37:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.240 00:06:59.240 real 0m0.019s 00:06:59.240 user 0m0.010s 00:06:59.240 sys 0m0.009s 00:06:59.240 10:37:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.240 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.240 ************************************ 00:06:59.240 END TEST accel_negative_buffers 00:06:59.240 ************************************ 00:06:59.240 Error: writing output failed: Broken pipe 00:06:59.240 10:37:15 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:59.240 10:37:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:59.240 10:37:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.240 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.240 ************************************ 00:06:59.240 START TEST accel_crc32c 00:06:59.240 ************************************ 00:06:59.240 10:37:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:59.240 10:37:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.240 10:37:15 -- accel/accel.sh@17 -- # local accel_module 00:06:59.240 10:37:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:59.240 10:37:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:59.240 10:37:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.240 10:37:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.240 10:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.240 10:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.240 10:37:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.240 10:37:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.240 10:37:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.240 10:37:15 -- accel/accel.sh@42 -- # jq -r . 00:06:59.240 [2024-07-10 10:37:15.893660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:59.240 [2024-07-10 10:37:15.893723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344073 ] 00:06:59.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.240 [2024-07-10 10:37:15.956955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.499 [2024-07-10 10:37:16.048610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.873 10:37:17 -- accel/accel.sh@18 -- # out=' 00:07:00.873 SPDK Configuration: 00:07:00.873 Core mask: 0x1 00:07:00.873 00:07:00.873 Accel Perf Configuration: 00:07:00.873 Workload Type: crc32c 00:07:00.873 CRC-32C seed: 32 00:07:00.873 Transfer size: 4096 bytes 00:07:00.873 Vector count 1 00:07:00.873 Module: software 00:07:00.873 Queue depth: 32 00:07:00.873 Allocate depth: 32 00:07:00.873 # threads/core: 1 00:07:00.873 Run time: 1 seconds 00:07:00.873 Verify: Yes 00:07:00.873 00:07:00.873 Running for 1 seconds... 
00:07:00.873 00:07:00.873 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.873 ------------------------------------------------------------------------------------ 00:07:00.873 0,0 401152/s 1567 MiB/s 0 0 00:07:00.873 ==================================================================================== 00:07:00.873 Total 401152/s 1567 MiB/s 0 0' 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:00.873 10:37:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.873 10:37:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.873 10:37:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.873 10:37:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.873 10:37:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.873 10:37:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.873 10:37:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.873 10:37:17 -- accel/accel.sh@42 -- # jq -r . 00:07:00.873 [2024-07-10 10:37:17.295078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:00.873 [2024-07-10 10:37:17.295146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344228 ] 00:07:00.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.873 [2024-07-10 10:37:17.355441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.873 [2024-07-10 10:37:17.443830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val=0x1 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val=crc32c 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.873 10:37:17 -- accel/accel.sh@21 -- # val=32 00:07:00.873 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.873 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 
10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val=software 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val=32 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val=32 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val=1 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val=Yes 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:00.874 10:37:17 -- accel/accel.sh@21 -- # val= 00:07:00.874 10:37:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # IFS=: 00:07:00.874 10:37:17 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@21 -- # val= 00:07:02.246 10:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@21 -- # val= 00:07:02.246 10:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@21 -- # val= 00:07:02.246 10:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@21 -- # val= 00:07:02.246 10:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@21 -- # val= 00:07:02.246 10:37:18 -- accel/accel.sh@22 -- # case "$var" in 
00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@21 -- # val= 00:07:02.246 10:37:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.246 10:37:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.246 10:37:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.246 10:37:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:02.246 10:37:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.246 00:07:02.246 real 0m2.791s 00:07:02.246 user 0m2.501s 00:07:02.246 sys 0m0.284s 00:07:02.246 10:37:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.246 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.246 ************************************ 00:07:02.246 END TEST accel_crc32c 00:07:02.246 ************************************ 00:07:02.246 10:37:18 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:02.246 10:37:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:02.246 10:37:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.246 10:37:18 -- common/autotest_common.sh@10 -- # set +x 00:07:02.246 ************************************ 00:07:02.246 START TEST accel_crc32c_C2 00:07:02.246 ************************************ 00:07:02.246 10:37:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:02.246 10:37:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.246 10:37:18 -- accel/accel.sh@17 -- # local accel_module 00:07:02.246 10:37:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:02.246 10:37:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:02.246 10:37:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.246 10:37:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.246 10:37:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.246 10:37:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.246 10:37:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.246 10:37:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.246 10:37:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.246 10:37:18 -- accel/accel.sh@42 -- # jq -r . 00:07:02.246 [2024-07-10 10:37:18.706092] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
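Note on the two crc32c runs: the accel_crc32c test above passed -S 32 (CRC-32C seed 32, single 4096-byte buffer), while the accel_crc32c_C2 test starting here passes -C 2, which sets the I/O vector count to 2 and leaves the seed at its default of 0, matching the "CRC-32C seed: 0" and "Vector count 2" configuration printed below. Side by side, simplified and without the -c config descriptor (illustrative only):

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # seed 32, vector count 1
./build/examples/accel_perf -t 1 -w crc32c -y -C 2    # default seed 0, vector count 2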
00:07:02.246 [2024-07-10 10:37:18.706169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344498 ] 00:07:02.246 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.246 [2024-07-10 10:37:18.766060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.246 [2024-07-10 10:37:18.856694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.618 10:37:20 -- accel/accel.sh@18 -- # out=' 00:07:03.618 SPDK Configuration: 00:07:03.618 Core mask: 0x1 00:07:03.618 00:07:03.618 Accel Perf Configuration: 00:07:03.618 Workload Type: crc32c 00:07:03.618 CRC-32C seed: 0 00:07:03.618 Transfer size: 4096 bytes 00:07:03.618 Vector count 2 00:07:03.618 Module: software 00:07:03.618 Queue depth: 32 00:07:03.618 Allocate depth: 32 00:07:03.618 # threads/core: 1 00:07:03.618 Run time: 1 seconds 00:07:03.618 Verify: Yes 00:07:03.618 00:07:03.618 Running for 1 seconds... 00:07:03.618 00:07:03.618 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.618 ------------------------------------------------------------------------------------ 00:07:03.618 0,0 313792/s 2451 MiB/s 0 0 00:07:03.618 ==================================================================================== 00:07:03.618 Total 313792/s 1225 MiB/s 0 0' 00:07:03.618 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.618 10:37:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:03.618 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.618 10:37:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:03.618 10:37:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.618 10:37:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.618 10:37:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.618 10:37:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.619 10:37:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.619 10:37:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.619 10:37:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.619 10:37:20 -- accel/accel.sh@42 -- # jq -r . 00:07:03.619 [2024-07-10 10:37:20.105130] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:03.619 [2024-07-10 10:37:20.105198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344639 ] 00:07:03.619 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.619 [2024-07-10 10:37:20.166876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.619 [2024-07-10 10:37:20.255992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=0x1 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=crc32c 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=0 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=software 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=32 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=32 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- 
accel/accel.sh@21 -- # val=1 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val=Yes 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 10:37:20 -- accel/accel.sh@21 -- # val= 00:07:03.619 10:37:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 10:37:20 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@21 -- # val= 00:07:04.992 10:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@21 -- # val= 00:07:04.992 10:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@21 -- # val= 00:07:04.992 10:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@21 -- # val= 00:07:04.992 10:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@21 -- # val= 00:07:04.992 10:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@21 -- # val= 00:07:04.992 10:37:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # IFS=: 00:07:04.992 10:37:21 -- accel/accel.sh@20 -- # read -r var val 00:07:04.992 10:37:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.992 10:37:21 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:04.992 10:37:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.992 00:07:04.992 real 0m2.794s 00:07:04.992 user 0m2.504s 00:07:04.992 sys 0m0.283s 00:07:04.992 10:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.992 10:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:04.992 ************************************ 00:07:04.992 END TEST accel_crc32c_C2 00:07:04.992 ************************************ 00:07:04.992 10:37:21 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:04.992 10:37:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:04.992 10:37:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.992 10:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:04.992 ************************************ 00:07:04.992 START TEST accel_copy 
00:07:04.992 ************************************ 00:07:04.992 10:37:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:04.992 10:37:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.992 10:37:21 -- accel/accel.sh@17 -- # local accel_module 00:07:04.992 10:37:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:04.992 10:37:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.992 10:37:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.992 10:37:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.992 10:37:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.992 10:37:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.992 10:37:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.992 10:37:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.992 10:37:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.992 10:37:21 -- accel/accel.sh@42 -- # jq -r . 00:07:04.992 [2024-07-10 10:37:21.524566] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:04.992 [2024-07-10 10:37:21.524641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344803 ] 00:07:04.992 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.992 [2024-07-10 10:37:21.585806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.992 [2024-07-10 10:37:21.676038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.369 10:37:22 -- accel/accel.sh@18 -- # out=' 00:07:06.369 SPDK Configuration: 00:07:06.369 Core mask: 0x1 00:07:06.369 00:07:06.369 Accel Perf Configuration: 00:07:06.369 Workload Type: copy 00:07:06.369 Transfer size: 4096 bytes 00:07:06.369 Vector count 1 00:07:06.369 Module: software 00:07:06.369 Queue depth: 32 00:07:06.369 Allocate depth: 32 00:07:06.369 # threads/core: 1 00:07:06.369 Run time: 1 seconds 00:07:06.369 Verify: Yes 00:07:06.369 00:07:06.369 Running for 1 seconds... 00:07:06.369 00:07:06.369 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.369 ------------------------------------------------------------------------------------ 00:07:06.369 0,0 275680/s 1076 MiB/s 0 0 00:07:06.369 ==================================================================================== 00:07:06.369 Total 275680/s 1076 MiB/s 0 0' 00:07:06.369 10:37:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:06.369 10:37:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:06.369 10:37:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.369 10:37:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.369 10:37:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.369 10:37:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.369 10:37:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.369 10:37:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.369 10:37:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.369 10:37:22 -- accel/accel.sh@42 -- # jq -r . 00:07:06.369 [2024-07-10 10:37:22.926276] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
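Note on the bandwidth column in these tables: it follows directly from the transfer rate and the 4096-byte transfer size, so the copy run above (275680 transfers/s) works out to about 1076 MiB/s. A one-line check of that arithmetic (not part of the captured output):

awk 'BEGIN { printf "%d MiB/s\n", 275680 * 4096 / (1024 * 1024) }'
# prints: 1076 MiB/s; the earlier crc32c run is the same calculation: 401152 * 4096 / 2^20 = 1567 MiB/s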
00:07:06.369 [2024-07-10 10:37:22.926347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344943 ] 00:07:06.369 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.369 [2024-07-10 10:37:22.986662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.369 [2024-07-10 10:37:23.075707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=0x1 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=copy 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=software 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=32 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=32 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=1 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val=Yes 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:06.369 10:37:23 -- accel/accel.sh@21 -- # val= 00:07:06.369 10:37:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # IFS=: 00:07:06.369 10:37:23 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@21 -- # val= 00:07:07.746 10:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@21 -- # val= 00:07:07.746 10:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@21 -- # val= 00:07:07.746 10:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@21 -- # val= 00:07:07.746 10:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@21 -- # val= 00:07:07.746 10:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@21 -- # val= 00:07:07.746 10:37:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # IFS=: 00:07:07.746 10:37:24 -- accel/accel.sh@20 -- # read -r var val 00:07:07.746 10:37:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.746 10:37:24 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:07.746 10:37:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.746 00:07:07.746 real 0m2.788s 00:07:07.746 user 0m2.494s 00:07:07.746 sys 0m0.285s 00:07:07.746 10:37:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.746 10:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.746 ************************************ 00:07:07.746 END TEST accel_copy 00:07:07.746 ************************************ 00:07:07.746 10:37:24 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.746 10:37:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:07.746 10:37:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.746 10:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.746 ************************************ 00:07:07.746 START TEST accel_fill 00:07:07.746 ************************************ 00:07:07.746 10:37:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.746 10:37:24 -- accel/accel.sh@16 -- # local accel_opc 
00:07:07.746 10:37:24 -- accel/accel.sh@17 -- # local accel_module 00:07:07.746 10:37:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.746 10:37:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.746 10:37:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.747 10:37:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.747 10:37:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.747 10:37:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.747 10:37:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.747 10:37:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.747 10:37:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.747 10:37:24 -- accel/accel.sh@42 -- # jq -r . 00:07:07.747 [2024-07-10 10:37:24.335087] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:07.747 [2024-07-10 10:37:24.335165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345225 ] 00:07:07.747 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.747 [2024-07-10 10:37:24.395934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.747 [2024-07-10 10:37:24.486345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.118 10:37:25 -- accel/accel.sh@18 -- # out=' 00:07:09.118 SPDK Configuration: 00:07:09.118 Core mask: 0x1 00:07:09.118 00:07:09.118 Accel Perf Configuration: 00:07:09.118 Workload Type: fill 00:07:09.118 Fill pattern: 0x80 00:07:09.118 Transfer size: 4096 bytes 00:07:09.118 Vector count 1 00:07:09.118 Module: software 00:07:09.118 Queue depth: 64 00:07:09.118 Allocate depth: 64 00:07:09.118 # threads/core: 1 00:07:09.118 Run time: 1 seconds 00:07:09.118 Verify: Yes 00:07:09.118 00:07:09.119 Running for 1 seconds... 00:07:09.119 00:07:09.119 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.119 ------------------------------------------------------------------------------------ 00:07:09.119 0,0 402880/s 1573 MiB/s 0 0 00:07:09.119 ==================================================================================== 00:07:09.119 Total 402880/s 1573 MiB/s 0 0' 00:07:09.119 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.119 10:37:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.119 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.119 10:37:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.119 10:37:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.119 10:37:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.119 10:37:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.119 10:37:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.119 10:37:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.119 10:37:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.119 10:37:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.119 10:37:25 -- accel/accel.sh@42 -- # jq -r . 00:07:09.119 [2024-07-10 10:37:25.738258] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
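The fill run traced above is driven by the accel_perf arguments recorded verbatim in the log (-t 1 -w fill -f 128 -q 64 -a 64 -y), and each flag reappears in the SPDK Configuration block it prints. Below is a minimal sketch of the same invocation outside the harness; the binary path is the one used in this workspace, and dropping the -c /dev/fd/62 JSON config (which the harness builds on the fly and leaves empty for these software-module runs) is an assumption of the sketch:

  # Flags as seen in the trace, mapped to the printed configuration:
  #   -t 1    -> Run time: 1 seconds         -q 64 -> Queue depth: 64
  #   -w fill -> Workload Type: fill         -a 64 -> Allocate depth: 64
  #   -f 128  -> Fill pattern: 0x80 (128 decimal)   -y -> Verify: Yes
  # Transfer size stays at the default 4096 bytes, matching the output above.
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y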
00:07:09.119 [2024-07-10 10:37:25.738332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345364 ] 00:07:09.119 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.119 [2024-07-10 10:37:25.797840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.119 [2024-07-10 10:37:25.888722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=0x1 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=fill 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=0x80 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=software 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=64 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=64 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- 
accel/accel.sh@21 -- # val=1 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val=Yes 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.376 10:37:25 -- accel/accel.sh@21 -- # val= 00:07:09.376 10:37:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.376 10:37:25 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@21 -- # val= 00:07:10.308 10:37:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # IFS=: 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@21 -- # val= 00:07:10.308 10:37:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # IFS=: 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@21 -- # val= 00:07:10.308 10:37:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # IFS=: 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@21 -- # val= 00:07:10.308 10:37:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # IFS=: 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@21 -- # val= 00:07:10.308 10:37:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # IFS=: 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@21 -- # val= 00:07:10.308 10:37:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # IFS=: 00:07:10.308 10:37:27 -- accel/accel.sh@20 -- # read -r var val 00:07:10.308 10:37:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.308 10:37:27 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:10.308 10:37:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.308 00:07:10.308 real 0m2.804s 00:07:10.308 user 0m2.506s 00:07:10.308 sys 0m0.291s 00:07:10.308 10:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.308 10:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:10.308 ************************************ 00:07:10.308 END TEST accel_fill 00:07:10.308 ************************************ 00:07:10.566 10:37:27 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:10.566 10:37:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:10.566 10:37:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.566 10:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:10.566 ************************************ 00:07:10.566 START TEST 
accel_copy_crc32c 00:07:10.566 ************************************ 00:07:10.566 10:37:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:10.566 10:37:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.566 10:37:27 -- accel/accel.sh@17 -- # local accel_module 00:07:10.566 10:37:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.566 10:37:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.566 10:37:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.566 10:37:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.566 10:37:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.566 10:37:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.566 10:37:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.566 10:37:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.566 10:37:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.566 10:37:27 -- accel/accel.sh@42 -- # jq -r . 00:07:10.566 [2024-07-10 10:37:27.165651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:10.566 [2024-07-10 10:37:27.165744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345521 ] 00:07:10.566 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.566 [2024-07-10 10:37:27.228786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.566 [2024-07-10 10:37:27.319439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.938 10:37:28 -- accel/accel.sh@18 -- # out=' 00:07:11.938 SPDK Configuration: 00:07:11.938 Core mask: 0x1 00:07:11.938 00:07:11.938 Accel Perf Configuration: 00:07:11.938 Workload Type: copy_crc32c 00:07:11.938 CRC-32C seed: 0 00:07:11.938 Vector size: 4096 bytes 00:07:11.938 Transfer size: 4096 bytes 00:07:11.938 Vector count 1 00:07:11.938 Module: software 00:07:11.938 Queue depth: 32 00:07:11.938 Allocate depth: 32 00:07:11.938 # threads/core: 1 00:07:11.938 Run time: 1 seconds 00:07:11.938 Verify: Yes 00:07:11.938 00:07:11.938 Running for 1 seconds... 00:07:11.938 00:07:11.938 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.938 ------------------------------------------------------------------------------------ 00:07:11.938 0,0 216096/s 844 MiB/s 0 0 00:07:11.938 ==================================================================================== 00:07:11.938 Total 216096/s 844 MiB/s 0 0' 00:07:11.938 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:11.938 10:37:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:11.938 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:11.938 10:37:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:11.938 10:37:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.938 10:37:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.938 10:37:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.938 10:37:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.938 10:37:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.938 10:37:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.938 10:37:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.938 10:37:28 -- accel/accel.sh@42 -- # jq -r . 
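The Bandwidth column in these result tables is simply the Transfers column multiplied by the transfer size. A small shell-arithmetic sketch using the copy_crc32c numbers above (216096 transfers/s at 4096 bytes each); the same arithmetic reproduces the other tables in this section, and for the -C 2 run further down, whose configuration reports 8192-byte transfers (two 4096-byte vectors), it lines up with that table's larger per-core figure:

  # 216096 transfers/s * 4096 bytes ~= 844 MiB/s, as reported above.
  transfers=216096
  size=4096
  echo "$(( transfers * size / 1024 / 1024 )) MiB/s"   # prints: 844 MiB/s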
00:07:11.938 [2024-07-10 10:37:28.568559] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:11.938 [2024-07-10 10:37:28.568639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345668 ] 00:07:11.938 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.938 [2024-07-10 10:37:28.628038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.938 [2024-07-10 10:37:28.717384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val=0x1 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val=0 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val=software 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.196 10:37:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.196 10:37:28 -- accel/accel.sh@21 -- # val=32 00:07:12.196 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 
00:07:12.196 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.197 10:37:28 -- accel/accel.sh@21 -- # val=32 00:07:12.197 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.197 10:37:28 -- accel/accel.sh@21 -- # val=1 00:07:12.197 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.197 10:37:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.197 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.197 10:37:28 -- accel/accel.sh@21 -- # val=Yes 00:07:12.197 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.197 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.197 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.197 10:37:28 -- accel/accel.sh@21 -- # val= 00:07:12.197 10:37:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.197 10:37:28 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@21 -- # val= 00:07:13.130 10:37:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@21 -- # val= 00:07:13.130 10:37:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@21 -- # val= 00:07:13.130 10:37:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@21 -- # val= 00:07:13.130 10:37:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@21 -- # val= 00:07:13.130 10:37:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@21 -- # val= 00:07:13.130 10:37:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.130 10:37:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.130 10:37:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.130 10:37:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:13.130 10:37:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.130 00:07:13.130 real 0m2.790s 00:07:13.130 user 0m2.494s 00:07:13.130 sys 0m0.289s 00:07:13.130 10:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.130 10:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.130 ************************************ 00:07:13.130 END TEST accel_copy_crc32c 00:07:13.130 ************************************ 00:07:13.388 
10:37:29 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.388 10:37:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:13.388 10:37:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.388 10:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:13.388 ************************************ 00:07:13.388 START TEST accel_copy_crc32c_C2 00:07:13.388 ************************************ 00:07:13.388 10:37:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.388 10:37:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.388 10:37:29 -- accel/accel.sh@17 -- # local accel_module 00:07:13.388 10:37:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:13.388 10:37:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:13.388 10:37:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.388 10:37:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.388 10:37:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.388 10:37:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.388 10:37:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.388 10:37:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.388 10:37:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.388 10:37:29 -- accel/accel.sh@42 -- # jq -r . 00:07:13.388 [2024-07-10 10:37:29.973848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:13.389 [2024-07-10 10:37:29.973947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345947 ] 00:07:13.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.389 [2024-07-10 10:37:30.038142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.389 [2024-07-10 10:37:30.127180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.763 10:37:31 -- accel/accel.sh@18 -- # out=' 00:07:14.763 SPDK Configuration: 00:07:14.763 Core mask: 0x1 00:07:14.763 00:07:14.763 Accel Perf Configuration: 00:07:14.763 Workload Type: copy_crc32c 00:07:14.763 CRC-32C seed: 0 00:07:14.763 Vector size: 4096 bytes 00:07:14.763 Transfer size: 8192 bytes 00:07:14.763 Vector count 2 00:07:14.763 Module: software 00:07:14.763 Queue depth: 32 00:07:14.763 Allocate depth: 32 00:07:14.763 # threads/core: 1 00:07:14.763 Run time: 1 seconds 00:07:14.763 Verify: Yes 00:07:14.763 00:07:14.763 Running for 1 seconds... 
00:07:14.763 00:07:14.763 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.763 ------------------------------------------------------------------------------------ 00:07:14.763 0,0 154272/s 1205 MiB/s 0 0 00:07:14.763 ==================================================================================== 00:07:14.763 Total 154272/s 602 MiB/s 0 0' 00:07:14.763 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:14.763 10:37:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:14.763 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:14.763 10:37:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:14.763 10:37:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.763 10:37:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.763 10:37:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.763 10:37:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.763 10:37:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.763 10:37:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.763 10:37:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.763 10:37:31 -- accel/accel.sh@42 -- # jq -r . 00:07:14.763 [2024-07-10 10:37:31.376845] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:14.763 [2024-07-10 10:37:31.376914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346092 ] 00:07:14.763 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.763 [2024-07-10 10:37:31.436585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.763 [2024-07-10 10:37:31.526598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.021 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.021 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.021 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=0x1 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=0 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 
00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=software 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=32 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=32 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=1 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val=Yes 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.022 10:37:31 -- accel/accel.sh@21 -- # val= 00:07:15.022 10:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.022 10:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@21 -- # val= 00:07:15.956 10:37:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # IFS=: 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@21 -- # val= 00:07:15.956 10:37:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # IFS=: 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@21 -- # val= 00:07:15.956 10:37:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # IFS=: 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@21 -- # val= 00:07:15.956 10:37:32 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # IFS=: 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@21 -- # val= 00:07:15.956 10:37:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # IFS=: 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@21 -- # val= 00:07:15.956 10:37:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # IFS=: 00:07:15.956 10:37:32 -- accel/accel.sh@20 -- # read -r var val 00:07:15.956 10:37:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.956 10:37:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:15.956 10:37:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.956 00:07:15.956 real 0m2.802s 00:07:15.956 user 0m2.505s 00:07:15.956 sys 0m0.290s 00:07:15.956 10:37:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.956 10:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:15.956 ************************************ 00:07:15.956 END TEST accel_copy_crc32c_C2 00:07:15.956 ************************************ 00:07:16.214 10:37:32 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:16.214 10:37:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:16.214 10:37:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.214 10:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:16.214 ************************************ 00:07:16.214 START TEST accel_dualcast 00:07:16.214 ************************************ 00:07:16.214 10:37:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:16.214 10:37:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.214 10:37:32 -- accel/accel.sh@17 -- # local accel_module 00:07:16.214 10:37:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:16.214 10:37:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:16.214 10:37:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.214 10:37:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.214 10:37:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.214 10:37:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.214 10:37:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.214 10:37:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.214 10:37:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.214 10:37:32 -- accel/accel.sh@42 -- # jq -r . 00:07:16.214 [2024-07-10 10:37:32.803339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:16.214 [2024-07-10 10:37:32.803416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346247 ] 00:07:16.214 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.214 [2024-07-10 10:37:32.864759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.214 [2024-07-10 10:37:32.953049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.586 10:37:34 -- accel/accel.sh@18 -- # out=' 00:07:17.586 SPDK Configuration: 00:07:17.586 Core mask: 0x1 00:07:17.586 00:07:17.586 Accel Perf Configuration: 00:07:17.586 Workload Type: dualcast 00:07:17.586 Transfer size: 4096 bytes 00:07:17.586 Vector count 1 00:07:17.586 Module: software 00:07:17.586 Queue depth: 32 00:07:17.586 Allocate depth: 32 00:07:17.586 # threads/core: 1 00:07:17.586 Run time: 1 seconds 00:07:17.586 Verify: Yes 00:07:17.586 00:07:17.586 Running for 1 seconds... 00:07:17.586 00:07:17.586 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.586 ------------------------------------------------------------------------------------ 00:07:17.586 0,0 296576/s 1158 MiB/s 0 0 00:07:17.586 ==================================================================================== 00:07:17.586 Total 296576/s 1158 MiB/s 0 0' 00:07:17.586 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.586 10:37:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:17.586 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.586 10:37:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:17.586 10:37:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.586 10:37:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.586 10:37:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.586 10:37:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.586 10:37:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.586 10:37:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.586 10:37:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.586 10:37:34 -- accel/accel.sh@42 -- # jq -r . 00:07:17.586 [2024-07-10 10:37:34.200660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:17.586 [2024-07-10 10:37:34.200743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346390 ] 00:07:17.586 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.586 [2024-07-10 10:37:34.261509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.586 [2024-07-10 10:37:34.354143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=0x1 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=dualcast 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=software 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=32 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=32 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=1 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val=Yes 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:17.844 10:37:34 -- accel/accel.sh@21 -- # val= 00:07:17.844 10:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:17.844 10:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@21 -- # val= 00:07:18.778 10:37:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@21 -- # val= 00:07:18.778 10:37:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@21 -- # val= 00:07:18.778 10:37:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@21 -- # val= 00:07:18.778 10:37:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@21 -- # val= 00:07:18.778 10:37:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@21 -- # val= 00:07:18.778 10:37:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:18.778 10:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:18.778 10:37:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.778 10:37:35 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:18.778 10:37:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.778 00:07:18.778 real 0m2.794s 00:07:18.778 user 0m2.513s 00:07:18.778 sys 0m0.271s 00:07:18.778 10:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.778 10:37:35 -- common/autotest_common.sh@10 -- # set +x 00:07:18.778 ************************************ 00:07:18.778 END TEST accel_dualcast 00:07:18.778 ************************************ 00:07:18.778 10:37:35 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:18.778 10:37:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:18.778 10:37:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.778 10:37:35 -- common/autotest_common.sh@10 -- # set +x 00:07:19.036 ************************************ 00:07:19.036 START TEST accel_compare 00:07:19.036 ************************************ 00:07:19.036 10:37:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:19.036 10:37:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.036 10:37:35 
-- accel/accel.sh@17 -- # local accel_module 00:07:19.036 10:37:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:19.036 10:37:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:19.036 10:37:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.036 10:37:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.036 10:37:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.036 10:37:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.036 10:37:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.036 10:37:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.036 10:37:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.036 10:37:35 -- accel/accel.sh@42 -- # jq -r . 00:07:19.036 [2024-07-10 10:37:35.618869] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:19.036 [2024-07-10 10:37:35.618949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346674 ] 00:07:19.036 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.036 [2024-07-10 10:37:35.680784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.036 [2024-07-10 10:37:35.769966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.408 10:37:37 -- accel/accel.sh@18 -- # out=' 00:07:20.408 SPDK Configuration: 00:07:20.408 Core mask: 0x1 00:07:20.409 00:07:20.409 Accel Perf Configuration: 00:07:20.409 Workload Type: compare 00:07:20.409 Transfer size: 4096 bytes 00:07:20.409 Vector count 1 00:07:20.409 Module: software 00:07:20.409 Queue depth: 32 00:07:20.409 Allocate depth: 32 00:07:20.409 # threads/core: 1 00:07:20.409 Run time: 1 seconds 00:07:20.409 Verify: Yes 00:07:20.409 00:07:20.409 Running for 1 seconds... 00:07:20.409 00:07:20.409 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.409 ------------------------------------------------------------------------------------ 00:07:20.409 0,0 402592/s 1572 MiB/s 0 0 00:07:20.409 ==================================================================================== 00:07:20.409 Total 402592/s 1572 MiB/s 0 0' 00:07:20.409 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.409 10:37:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:20.409 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.409 10:37:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:20.409 10:37:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.409 10:37:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.409 10:37:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.409 10:37:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.409 10:37:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.409 10:37:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.409 10:37:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.409 10:37:37 -- accel/accel.sh@42 -- # jq -r . 00:07:20.409 [2024-07-10 10:37:37.018858] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
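Every table in this section carries the same four data columns — Transfers, Bandwidth, Failed, Miscompares — and a passing run keeps the last two at zero, as in the compare results above. Here is a small awk sketch for pulling the summary out of a directly captured accel_perf run; the filename is illustrative, and it assumes the plain output format shown here rather than this timestamp-prefixed CI transcript:

  # Print throughput and bandwidth from the 'Total' summary line.
  # Fields on that line: Total <transfers>/s <MiB> MiB/s <failed> <miscompares>
  awk '$1 == "Total" { printf "transfers=%s bandwidth=%s %s failed=%s miscompares=%s\n", $2, $3, $4, $5, $6 }' accel_perf_output.txt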
00:07:20.409 [2024-07-10 10:37:37.018937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346813 ] 00:07:20.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.409 [2024-07-10 10:37:37.079574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.409 [2024-07-10 10:37:37.169903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.409 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.409 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.409 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.409 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.409 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val=0x1 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val=compare 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val=software 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.666 10:37:37 -- accel/accel.sh@21 -- # val=32 00:07:20.666 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.666 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.667 10:37:37 -- accel/accel.sh@21 -- # val=32 00:07:20.667 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.667 10:37:37 -- accel/accel.sh@21 -- # val=1 00:07:20.667 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.667 10:37:37 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:20.667 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.667 10:37:37 -- accel/accel.sh@21 -- # val=Yes 00:07:20.667 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.667 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.667 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:20.667 10:37:37 -- accel/accel.sh@21 -- # val= 00:07:20.667 10:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:20.667 10:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:21.600 10:37:38 -- accel/accel.sh@21 -- # val= 00:07:21.601 10:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.601 10:37:38 -- accel/accel.sh@21 -- # val= 00:07:21.601 10:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.601 10:37:38 -- accel/accel.sh@21 -- # val= 00:07:21.601 10:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.601 10:37:38 -- accel/accel.sh@21 -- # val= 00:07:21.601 10:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.601 10:37:38 -- accel/accel.sh@21 -- # val= 00:07:21.601 10:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.601 10:37:38 -- accel/accel.sh@21 -- # val= 00:07:21.601 10:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:21.601 10:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:21.601 10:37:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.601 10:37:38 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:21.601 10:37:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.601 00:07:21.601 real 0m2.804s 00:07:21.601 user 0m2.513s 00:07:21.601 sys 0m0.283s 00:07:21.601 10:37:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.601 10:37:38 -- common/autotest_common.sh@10 -- # set +x 00:07:21.601 ************************************ 00:07:21.601 END TEST accel_compare 00:07:21.601 ************************************ 00:07:21.859 10:37:38 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:21.859 10:37:38 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:21.859 10:37:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.859 10:37:38 -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 ************************************ 00:07:21.859 START TEST accel_xor 00:07:21.859 ************************************ 00:07:21.859 10:37:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:21.859 10:37:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.859 10:37:38 -- accel/accel.sh@17 
-- # local accel_module 00:07:21.859 10:37:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:21.859 10:37:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.859 10:37:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.859 10:37:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.859 10:37:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.859 10:37:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.859 10:37:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.859 10:37:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.859 10:37:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.859 10:37:38 -- accel/accel.sh@42 -- # jq -r . 00:07:21.859 [2024-07-10 10:37:38.451646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:21.859 [2024-07-10 10:37:38.451725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346974 ] 00:07:21.859 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.859 [2024-07-10 10:37:38.514770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.859 [2024-07-10 10:37:38.602678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.234 10:37:39 -- accel/accel.sh@18 -- # out=' 00:07:23.234 SPDK Configuration: 00:07:23.234 Core mask: 0x1 00:07:23.234 00:07:23.234 Accel Perf Configuration: 00:07:23.234 Workload Type: xor 00:07:23.234 Source buffers: 2 00:07:23.234 Transfer size: 4096 bytes 00:07:23.234 Vector count 1 00:07:23.234 Module: software 00:07:23.234 Queue depth: 32 00:07:23.234 Allocate depth: 32 00:07:23.234 # threads/core: 1 00:07:23.234 Run time: 1 seconds 00:07:23.234 Verify: Yes 00:07:23.234 00:07:23.234 Running for 1 seconds... 00:07:23.234 00:07:23.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.234 ------------------------------------------------------------------------------------ 00:07:23.234 0,0 191552/s 748 MiB/s 0 0 00:07:23.234 ==================================================================================== 00:07:23.234 Total 191552/s 748 MiB/s 0 0' 00:07:23.234 10:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.234 10:37:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:23.234 10:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.234 10:37:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:23.234 10:37:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.234 10:37:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.234 10:37:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.234 10:37:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.234 10:37:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.234 10:37:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.234 10:37:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.234 10:37:39 -- accel/accel.sh@42 -- # jq -r . 00:07:23.234 [2024-07-10 10:37:39.845235] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
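The xor run above reports Source buffers: 2 — no -x flag is passed for it, so two appears to be the default — while the next test in the suite (run_test accel_xor ... -x 3, just below) requests three and its configuration prints Source buffers: 3 accordingly. A sketch of the two invocations side by side, with the same caveats as earlier about the binary path and the omitted -c config:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w xor -y          # two source buffers, as in the run above
  "$PERF" -t 1 -w xor -y -x 3     # three source buffers, as in the next test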
00:07:23.234 [2024-07-10 10:37:39.845313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347117 ] 00:07:23.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.234 [2024-07-10 10:37:39.905382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.234 [2024-07-10 10:37:39.997970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=0x1 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=xor 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=2 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=software 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=32 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=32 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- 
accel/accel.sh@21 -- # val=1 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val=Yes 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:23.493 10:37:40 -- accel/accel.sh@21 -- # val= 00:07:23.493 10:37:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # IFS=: 00:07:23.493 10:37:40 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@21 -- # val= 00:07:24.426 10:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@21 -- # val= 00:07:24.426 10:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@21 -- # val= 00:07:24.426 10:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@21 -- # val= 00:07:24.426 10:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@21 -- # val= 00:07:24.426 10:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@21 -- # val= 00:07:24.426 10:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:24.426 10:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.426 10:37:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.426 10:37:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:24.426 10:37:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.426 00:07:24.426 real 0m2.790s 00:07:24.426 user 0m2.494s 00:07:24.426 sys 0m0.287s 00:07:24.426 10:37:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.426 10:37:41 -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 ************************************ 00:07:24.426 END TEST accel_xor 00:07:24.426 ************************************ 00:07:24.426 10:37:41 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:24.426 10:37:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:24.426 10:37:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.426 10:37:41 -- common/autotest_common.sh@10 -- # set +x 00:07:24.426 ************************************ 00:07:24.426 START TEST accel_xor 
00:07:24.426 ************************************ 00:07:24.426 10:37:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:24.426 10:37:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.426 10:37:41 -- accel/accel.sh@17 -- # local accel_module 00:07:24.426 10:37:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:24.426 10:37:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:24.684 10:37:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.684 10:37:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.684 10:37:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.684 10:37:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.684 10:37:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.684 10:37:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.684 10:37:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.684 10:37:41 -- accel/accel.sh@42 -- # jq -r . 00:07:24.684 [2024-07-10 10:37:41.265362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:24.684 [2024-07-10 10:37:41.265468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347382 ] 00:07:24.684 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.684 [2024-07-10 10:37:41.330816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.684 [2024-07-10 10:37:41.418059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.059 10:37:42 -- accel/accel.sh@18 -- # out=' 00:07:26.059 SPDK Configuration: 00:07:26.059 Core mask: 0x1 00:07:26.059 00:07:26.059 Accel Perf Configuration: 00:07:26.059 Workload Type: xor 00:07:26.059 Source buffers: 3 00:07:26.059 Transfer size: 4096 bytes 00:07:26.059 Vector count 1 00:07:26.059 Module: software 00:07:26.059 Queue depth: 32 00:07:26.059 Allocate depth: 32 00:07:26.059 # threads/core: 1 00:07:26.059 Run time: 1 seconds 00:07:26.059 Verify: Yes 00:07:26.059 00:07:26.059 Running for 1 seconds... 00:07:26.059 00:07:26.059 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.059 ------------------------------------------------------------------------------------ 00:07:26.059 0,0 182656/s 713 MiB/s 0 0 00:07:26.059 ==================================================================================== 00:07:26.059 Total 182656/s 713 MiB/s 0 0' 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:26.059 10:37:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.059 10:37:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.059 10:37:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.059 10:37:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.059 10:37:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.059 10:37:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.059 10:37:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.059 10:37:42 -- accel/accel.sh@42 -- # jq -r . 00:07:26.059 [2024-07-10 10:37:42.656319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:26.059 [2024-07-10 10:37:42.656399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347540 ] 00:07:26.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.059 [2024-07-10 10:37:42.717541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.059 [2024-07-10 10:37:42.808528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=0x1 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=xor 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=3 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=software 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=32 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=32 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- 
accel/accel.sh@21 -- # val=1 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val=Yes 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.059 10:37:42 -- accel/accel.sh@21 -- # val= 00:07:26.059 10:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.059 10:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@21 -- # val= 00:07:27.430 10:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@21 -- # val= 00:07:27.430 10:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@21 -- # val= 00:07:27.430 10:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@21 -- # val= 00:07:27.430 10:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@21 -- # val= 00:07:27.430 10:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@21 -- # val= 00:07:27.430 10:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:27.430 10:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:27.430 10:37:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.430 10:37:44 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:27.430 10:37:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.430 00:07:27.430 real 0m2.799s 00:07:27.430 user 0m2.490s 00:07:27.430 sys 0m0.299s 00:07:27.430 10:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.430 10:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 ************************************ 00:07:27.430 END TEST accel_xor 00:07:27.430 ************************************ 00:07:27.430 10:37:44 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:27.430 10:37:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:27.430 10:37:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.430 10:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 ************************************ 00:07:27.430 START TEST 
accel_dif_verify 00:07:27.430 ************************************ 00:07:27.430 10:37:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:27.430 10:37:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.430 10:37:44 -- accel/accel.sh@17 -- # local accel_module 00:07:27.430 10:37:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:27.430 10:37:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:27.430 10:37:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.430 10:37:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.430 10:37:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.430 10:37:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.430 10:37:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.430 10:37:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.430 10:37:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.430 10:37:44 -- accel/accel.sh@42 -- # jq -r . 00:07:27.430 [2024-07-10 10:37:44.088644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:27.430 [2024-07-10 10:37:44.088745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347695 ] 00:07:27.430 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.430 [2024-07-10 10:37:44.148545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.430 [2024-07-10 10:37:44.239514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.799 10:37:45 -- accel/accel.sh@18 -- # out=' 00:07:28.799 SPDK Configuration: 00:07:28.799 Core mask: 0x1 00:07:28.799 00:07:28.799 Accel Perf Configuration: 00:07:28.799 Workload Type: dif_verify 00:07:28.799 Vector size: 4096 bytes 00:07:28.799 Transfer size: 4096 bytes 00:07:28.799 Block size: 512 bytes 00:07:28.799 Metadata size: 8 bytes 00:07:28.799 Vector count 1 00:07:28.799 Module: software 00:07:28.799 Queue depth: 32 00:07:28.799 Allocate depth: 32 00:07:28.799 # threads/core: 1 00:07:28.799 Run time: 1 seconds 00:07:28.799 Verify: No 00:07:28.799 00:07:28.799 Running for 1 seconds... 00:07:28.799 00:07:28.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.799 ------------------------------------------------------------------------------------ 00:07:28.799 0,0 81600/s 323 MiB/s 0 0 00:07:28.799 ==================================================================================== 00:07:28.799 Total 81600/s 318 MiB/s 0 0' 00:07:28.799 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:28.799 10:37:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:28.799 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:28.799 10:37:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:28.799 10:37:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.799 10:37:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.799 10:37:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.799 10:37:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.799 10:37:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.799 10:37:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.799 10:37:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.799 10:37:45 -- accel/accel.sh@42 -- # jq -r . 
00:07:28.799 [2024-07-10 10:37:45.488590] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:28.799 [2024-07-10 10:37:45.488671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347841 ] 00:07:28.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.799 [2024-07-10 10:37:45.552711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.056 [2024-07-10 10:37:45.647771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val=0x1 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val=dif_verify 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.056 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.056 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.056 10:37:45 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val=software 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val=32 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val=32 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val=1 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val=No 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.057 10:37:45 -- accel/accel.sh@21 -- # val= 00:07:29.057 10:37:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.057 10:37:45 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@21 -- # val= 00:07:30.428 10:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@21 -- # val= 00:07:30.428 10:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@21 -- # val= 00:07:30.428 10:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@21 -- # val= 00:07:30.428 10:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@21 -- # val= 00:07:30.428 10:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@21 -- # val= 00:07:30.428 10:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.428 10:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.428 10:37:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.428 10:37:46 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:30.428 10:37:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.428 00:07:30.428 real 0m2.812s 00:07:30.428 user 0m2.522s 00:07:30.428 sys 0m0.284s 00:07:30.428 10:37:46 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.428 10:37:46 -- common/autotest_common.sh@10 -- # set +x 00:07:30.428 ************************************ 00:07:30.428 END TEST accel_dif_verify 00:07:30.428 ************************************ 00:07:30.428 10:37:46 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:30.428 10:37:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:30.428 10:37:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.428 10:37:46 -- common/autotest_common.sh@10 -- # set +x 00:07:30.428 ************************************ 00:07:30.428 START TEST accel_dif_generate 00:07:30.428 ************************************ 00:07:30.428 10:37:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:30.428 10:37:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.429 10:37:46 -- accel/accel.sh@17 -- # local accel_module 00:07:30.429 10:37:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:30.429 10:37:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:30.429 10:37:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.429 10:37:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.429 10:37:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.429 10:37:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.429 10:37:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.429 10:37:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.429 10:37:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.429 10:37:46 -- accel/accel.sh@42 -- # jq -r . 00:07:30.429 [2024-07-10 10:37:46.927690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:30.429 [2024-07-10 10:37:46.927766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348078 ] 00:07:30.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.429 [2024-07-10 10:37:46.989244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.429 [2024-07-10 10:37:47.081514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.801 10:37:48 -- accel/accel.sh@18 -- # out=' 00:07:31.801 SPDK Configuration: 00:07:31.801 Core mask: 0x1 00:07:31.801 00:07:31.801 Accel Perf Configuration: 00:07:31.801 Workload Type: dif_generate 00:07:31.801 Vector size: 4096 bytes 00:07:31.801 Transfer size: 4096 bytes 00:07:31.801 Block size: 512 bytes 00:07:31.801 Metadata size: 8 bytes 00:07:31.801 Vector count 1 00:07:31.801 Module: software 00:07:31.801 Queue depth: 32 00:07:31.801 Allocate depth: 32 00:07:31.801 # threads/core: 1 00:07:31.801 Run time: 1 seconds 00:07:31.801 Verify: No 00:07:31.801 00:07:31.801 Running for 1 seconds... 
00:07:31.801 00:07:31.801 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.801 ------------------------------------------------------------------------------------ 00:07:31.801 0,0 95840/s 380 MiB/s 0 0 00:07:31.801 ==================================================================================== 00:07:31.801 Total 95840/s 374 MiB/s 0 0' 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:31.801 10:37:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.801 10:37:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.801 10:37:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.801 10:37:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.801 10:37:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.801 10:37:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.801 10:37:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.801 10:37:48 -- accel/accel.sh@42 -- # jq -r . 00:07:31.801 [2024-07-10 10:37:48.318076] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:31.801 [2024-07-10 10:37:48.318159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348261 ] 00:07:31.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.801 [2024-07-10 10:37:48.380735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.801 [2024-07-10 10:37:48.474523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=0x1 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=dif_generate 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 
00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=software 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=32 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=32 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=1 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val=No 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:31.801 10:37:48 -- accel/accel.sh@21 -- # val= 00:07:31.801 10:37:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:31.801 10:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@21 -- # val= 00:07:33.174 10:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@21 -- # val= 00:07:33.174 10:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@21 -- # val= 00:07:33.174 10:37:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@21 -- # val= 00:07:33.174 10:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@21 -- # val= 00:07:33.174 10:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@21 -- # val= 00:07:33.174 10:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.174 10:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.174 10:37:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.174 10:37:49 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:33.174 10:37:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.174 00:07:33.174 real 0m2.806s 00:07:33.174 user 0m2.512s 00:07:33.174 sys 0m0.288s 00:07:33.174 10:37:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.174 10:37:49 -- common/autotest_common.sh@10 -- # set +x 00:07:33.174 ************************************ 00:07:33.174 END TEST accel_dif_generate 00:07:33.174 ************************************ 00:07:33.174 10:37:49 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:33.174 10:37:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:33.174 10:37:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.174 10:37:49 -- common/autotest_common.sh@10 -- # set +x 00:07:33.174 ************************************ 00:07:33.174 START TEST accel_dif_generate_copy 00:07:33.174 ************************************ 00:07:33.174 10:37:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:33.174 10:37:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.174 10:37:49 -- accel/accel.sh@17 -- # local accel_module 00:07:33.174 10:37:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:33.174 10:37:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:33.174 10:37:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.174 10:37:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.174 10:37:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.174 10:37:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.174 10:37:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.174 10:37:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.174 10:37:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.174 10:37:49 -- accel/accel.sh@42 -- # jq -r . 00:07:33.174 [2024-07-10 10:37:49.759772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:33.174 [2024-07-10 10:37:49.759859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348422 ] 00:07:33.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.174 [2024-07-10 10:37:49.824070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.174 [2024-07-10 10:37:49.918787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.545 10:37:51 -- accel/accel.sh@18 -- # out=' 00:07:34.545 SPDK Configuration: 00:07:34.545 Core mask: 0x1 00:07:34.545 00:07:34.545 Accel Perf Configuration: 00:07:34.545 Workload Type: dif_generate_copy 00:07:34.545 Vector size: 4096 bytes 00:07:34.545 Transfer size: 4096 bytes 00:07:34.545 Vector count 1 00:07:34.545 Module: software 00:07:34.545 Queue depth: 32 00:07:34.545 Allocate depth: 32 00:07:34.545 # threads/core: 1 00:07:34.545 Run time: 1 seconds 00:07:34.545 Verify: No 00:07:34.545 00:07:34.545 Running for 1 seconds... 00:07:34.545 00:07:34.545 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.545 ------------------------------------------------------------------------------------ 00:07:34.545 0,0 75392/s 299 MiB/s 0 0 00:07:34.545 ==================================================================================== 00:07:34.545 Total 75392/s 294 MiB/s 0 0' 00:07:34.545 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 10:37:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:34.545 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 10:37:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:34.545 10:37:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.545 10:37:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.545 10:37:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.545 10:37:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.545 10:37:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.545 10:37:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.545 10:37:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.545 10:37:51 -- accel/accel.sh@42 -- # jq -r . 00:07:34.545 [2024-07-10 10:37:51.172335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:34.546 [2024-07-10 10:37:51.172421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348564 ] 00:07:34.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.546 [2024-07-10 10:37:51.232822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.546 [2024-07-10 10:37:51.326297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val=0x1 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val=software 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val=32 00:07:34.803 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.803 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.803 10:37:51 -- accel/accel.sh@21 -- # val=32 00:07:34.804 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # read -r 
var val 00:07:34.804 10:37:51 -- accel/accel.sh@21 -- # val=1 00:07:34.804 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.804 10:37:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.804 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.804 10:37:51 -- accel/accel.sh@21 -- # val=No 00:07:34.804 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.804 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.804 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:34.804 10:37:51 -- accel/accel.sh@21 -- # val= 00:07:34.804 10:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:34.804 10:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@21 -- # val= 00:07:36.177 10:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@21 -- # val= 00:07:36.177 10:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@21 -- # val= 00:07:36.177 10:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@21 -- # val= 00:07:36.177 10:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@21 -- # val= 00:07:36.177 10:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@21 -- # val= 00:07:36.177 10:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.177 10:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.177 10:37:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.177 10:37:52 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:36.177 10:37:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.177 00:07:36.177 real 0m2.824s 00:07:36.177 user 0m2.527s 00:07:36.177 sys 0m0.288s 00:07:36.177 10:37:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.177 10:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.177 ************************************ 00:07:36.177 END TEST accel_dif_generate_copy 00:07:36.177 ************************************ 00:07:36.177 10:37:52 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:36.177 10:37:52 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.177 10:37:52 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:36.177 10:37:52 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.177 10:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.177 ************************************ 00:07:36.177 START TEST accel_comp 00:07:36.177 ************************************ 00:07:36.177 10:37:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.177 10:37:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.177 10:37:52 -- accel/accel.sh@17 -- # local accel_module 00:07:36.177 10:37:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.177 10:37:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.177 10:37:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.177 10:37:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.177 10:37:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.177 10:37:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.177 10:37:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.177 10:37:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.177 10:37:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.177 10:37:52 -- accel/accel.sh@42 -- # jq -r . 00:07:36.177 [2024-07-10 10:37:52.611026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:36.177 [2024-07-10 10:37:52.611104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348801 ] 00:07:36.177 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.177 [2024-07-10 10:37:52.673976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.177 [2024-07-10 10:37:52.767580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.549 10:37:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:37.549 00:07:37.549 SPDK Configuration: 00:07:37.549 Core mask: 0x1 00:07:37.549 00:07:37.549 Accel Perf Configuration: 00:07:37.549 Workload Type: compress 00:07:37.550 Transfer size: 4096 bytes 00:07:37.550 Vector count 1 00:07:37.550 Module: software 00:07:37.550 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.550 Queue depth: 32 00:07:37.550 Allocate depth: 32 00:07:37.550 # threads/core: 1 00:07:37.550 Run time: 1 seconds 00:07:37.550 Verify: No 00:07:37.550 00:07:37.550 Running for 1 seconds... 
00:07:37.550 00:07:37.550 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.550 ------------------------------------------------------------------------------------ 00:07:37.550 0,0 32320/s 134 MiB/s 0 0 00:07:37.550 ==================================================================================== 00:07:37.550 Total 32320/s 126 MiB/s 0 0' 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.550 10:37:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.550 10:37:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.550 10:37:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.550 10:37:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.550 10:37:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.550 10:37:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.550 10:37:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.550 10:37:54 -- accel/accel.sh@42 -- # jq -r . 00:07:37.550 [2024-07-10 10:37:54.020066] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:37.550 [2024-07-10 10:37:54.020150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348989 ] 00:07:37.550 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.550 [2024-07-10 10:37:54.080136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.550 [2024-07-10 10:37:54.174212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=0x1 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=compress 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 
10:37:54 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=software 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=32 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=32 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=1 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val=No 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.550 10:37:54 -- accel/accel.sh@21 -- # val= 00:07:37.550 10:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:37.550 10:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@21 -- # val= 00:07:38.924 10:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@21 -- # val= 00:07:38.924 10:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@21 -- # val= 00:07:38.924 10:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # 
IFS=: 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@21 -- # val= 00:07:38.924 10:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@21 -- # val= 00:07:38.924 10:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@21 -- # val= 00:07:38.924 10:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:38.924 10:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:38.924 10:37:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.924 10:37:55 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:38.924 10:37:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.924 00:07:38.924 real 0m2.826s 00:07:38.924 user 0m2.535s 00:07:38.924 sys 0m0.284s 00:07:38.924 10:37:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.924 10:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.924 ************************************ 00:07:38.924 END TEST accel_comp 00:07:38.924 ************************************ 00:07:38.924 10:37:55 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.924 10:37:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:38.924 10:37:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.924 10:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.924 ************************************ 00:07:38.924 START TEST accel_decomp 00:07:38.924 ************************************ 00:07:38.924 10:37:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.924 10:37:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.924 10:37:55 -- accel/accel.sh@17 -- # local accel_module 00:07:38.924 10:37:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.924 10:37:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.924 10:37:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.924 10:37:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.924 10:37:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.924 10:37:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.924 10:37:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.924 10:37:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.924 10:37:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.924 10:37:55 -- accel/accel.sh@42 -- # jq -r . 00:07:38.924 [2024-07-10 10:37:55.463342] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
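For reference, the call chain traced above (run_test → accel_test → accel_perf) can be reproduced outside the harness. A minimal stand-alone sketch, assuming an already-built SPDK tree (the SPDK_DIR variable is introduced only for readability; the default below is the workspace path from this log) and, as in this run, no extra accel modules, so build_accel_config has nothing to add:

  #!/usr/bin/env bash
  # Rough stand-in for the accel_test helper seen in the trace. The harness also
  # feeds a generated JSON config to accel_perf via '-c /dev/fd/62' (assembled by
  # build_accel_config and filtered through 'jq -r .'); with accel_json_cfg=()
  # empty here, that part is omitted from this sketch.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  accel_test() {
      "$SPDK_DIR/build/examples/accel_perf" "$@"
  }
  # Same arguments as the accel_decomp case that starts above:
  accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y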
00:07:38.924 [2024-07-10 10:37:55.463431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349143 ] 00:07:38.924 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.924 [2024-07-10 10:37:55.523895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.924 [2024-07-10 10:37:55.617847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.299 10:37:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.299 00:07:40.299 SPDK Configuration: 00:07:40.299 Core mask: 0x1 00:07:40.299 00:07:40.299 Accel Perf Configuration: 00:07:40.299 Workload Type: decompress 00:07:40.299 Transfer size: 4096 bytes 00:07:40.299 Vector count 1 00:07:40.299 Module: software 00:07:40.299 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.299 Queue depth: 32 00:07:40.299 Allocate depth: 32 00:07:40.299 # threads/core: 1 00:07:40.299 Run time: 1 seconds 00:07:40.299 Verify: Yes 00:07:40.299 00:07:40.299 Running for 1 seconds... 00:07:40.299 00:07:40.299 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.299 ------------------------------------------------------------------------------------ 00:07:40.299 0,0 54976/s 101 MiB/s 0 0 00:07:40.299 ==================================================================================== 00:07:40.299 Total 54976/s 214 MiB/s 0 0' 00:07:40.299 10:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.299 10:37:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.299 10:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.299 10:37:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.299 10:37:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.299 10:37:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.299 10:37:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.299 10:37:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.299 10:37:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.299 10:37:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.299 10:37:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.299 10:37:56 -- accel/accel.sh@42 -- # jq -r . 00:07:40.299 [2024-07-10 10:37:56.876681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
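As an informal cross-check on the software-module table above (not part of the harness output), the Total bandwidth follows directly from the transfer rate and the 4096-byte transfer size:

  $ echo 'scale=1; 54976 * 4096 / 1048576' | bc
  214.7   # ~= the 214 MiB/s reported in the Total row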
00:07:40.299 [2024-07-10 10:37:56.876761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349290 ] 00:07:40.299 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.299 [2024-07-10 10:37:56.941732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.299 [2024-07-10 10:37:57.033878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.299 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.299 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.299 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.299 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.299 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.299 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=0x1 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=decompress 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=software 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=32 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 
-- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=32 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=1 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val=Yes 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:40.300 10:37:57 -- accel/accel.sh@21 -- # val= 00:07:40.300 10:37:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # IFS=: 00:07:40.300 10:37:57 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@21 -- # val= 00:07:41.735 10:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@21 -- # val= 00:07:41.735 10:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@21 -- # val= 00:07:41.735 10:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@21 -- # val= 00:07:41.735 10:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@21 -- # val= 00:07:41.735 10:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@21 -- # val= 00:07:41.735 10:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:41.735 10:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:41.735 10:37:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.735 10:37:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:41.735 10:37:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.735 00:07:41.735 real 0m2.827s 00:07:41.735 user 0m2.527s 00:07:41.735 sys 0m0.292s 00:07:41.735 10:37:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.735 10:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:41.735 ************************************ 00:07:41.735 END TEST accel_decomp 00:07:41.735 ************************************ 00:07:41.735 10:37:58 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.735 10:37:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:41.735 10:37:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.735 10:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:41.735 ************************************ 00:07:41.735 START TEST accel_decmop_full 00:07:41.735 ************************************ 00:07:41.735 10:37:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.735 10:37:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.735 10:37:58 -- accel/accel.sh@17 -- # local accel_module 00:07:41.735 10:37:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.735 10:37:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.735 10:37:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.735 10:37:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.735 10:37:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.735 10:37:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.735 10:37:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.735 10:37:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.735 10:37:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.735 10:37:58 -- accel/accel.sh@42 -- # jq -r . 00:07:41.735 [2024-07-10 10:37:58.316871] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:41.735 [2024-07-10 10:37:58.316959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349540 ] 00:07:41.735 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.735 [2024-07-10 10:37:58.382070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.735 [2024-07-10 10:37:58.476064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.109 10:37:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:43.109 00:07:43.109 SPDK Configuration: 00:07:43.109 Core mask: 0x1 00:07:43.109 00:07:43.109 Accel Perf Configuration: 00:07:43.109 Workload Type: decompress 00:07:43.109 Transfer size: 111250 bytes 00:07:43.109 Vector count 1 00:07:43.109 Module: software 00:07:43.109 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.109 Queue depth: 32 00:07:43.109 Allocate depth: 32 00:07:43.109 # threads/core: 1 00:07:43.109 Run time: 1 seconds 00:07:43.109 Verify: Yes 00:07:43.109 00:07:43.109 Running for 1 seconds... 
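Note on this accel_decmop_full case: the '-o 0' argument is what switches the run to the larger 111250-byte transfers shown in the configuration block above (the size itself is taken from the log; how accel_perf derives it from '-o 0' is internal to the tool). The same informal bandwidth check applies to the Total row of the table that follows:

  $ echo 'scale=1; 3808 * 111250 / 1048576' | bc
  404.0   # matches the 404 MiB/s Total below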
00:07:43.109 00:07:43.109 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.109 ------------------------------------------------------------------------------------ 00:07:43.109 0,0 3808/s 157 MiB/s 0 0 00:07:43.109 ==================================================================================== 00:07:43.109 Total 3808/s 404 MiB/s 0 0' 00:07:43.109 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.109 10:37:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.109 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.109 10:37:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.109 10:37:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.109 10:37:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.109 10:37:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.109 10:37:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.109 10:37:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.109 10:37:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.109 10:37:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.109 10:37:59 -- accel/accel.sh@42 -- # jq -r . 00:07:43.109 [2024-07-10 10:37:59.741695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:43.109 [2024-07-10 10:37:59.741780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349721 ] 00:07:43.109 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.109 [2024-07-10 10:37:59.802336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.109 [2024-07-10 10:37:59.895367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=0x1 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=decompress 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:43.367 10:37:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=software 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=32 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=32 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=1 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val=Yes 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.367 10:37:59 -- accel/accel.sh@21 -- # val= 00:07:43.367 10:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.367 10:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@21 -- # val= 00:07:44.739 10:38:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@21 -- # val= 00:07:44.739 10:38:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@21 -- # val= 00:07:44.739 10:38:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.739 10:38:01 -- 
accel/accel.sh@20 -- # IFS=: 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@21 -- # val= 00:07:44.739 10:38:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@21 -- # val= 00:07:44.739 10:38:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@21 -- # val= 00:07:44.739 10:38:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # IFS=: 00:07:44.739 10:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:44.739 10:38:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.739 10:38:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.739 10:38:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.739 00:07:44.739 real 0m2.850s 00:07:44.739 user 0m2.561s 00:07:44.739 sys 0m0.281s 00:07:44.739 10:38:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.739 10:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 ************************************ 00:07:44.739 END TEST accel_decmop_full 00:07:44.739 ************************************ 00:07:44.739 10:38:01 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.739 10:38:01 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:44.739 10:38:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.739 10:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 ************************************ 00:07:44.739 START TEST accel_decomp_mcore 00:07:44.739 ************************************ 00:07:44.739 10:38:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.739 10:38:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.739 10:38:01 -- accel/accel.sh@17 -- # local accel_module 00:07:44.739 10:38:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.739 10:38:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.739 10:38:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.739 10:38:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.739 10:38:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.739 10:38:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.739 10:38:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.739 10:38:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.739 10:38:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.739 10:38:01 -- accel/accel.sh@42 -- # jq -r . 00:07:44.739 [2024-07-10 10:38:01.193194] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:44.739 [2024-07-10 10:38:01.193274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349875 ] 00:07:44.739 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.739 [2024-07-10 10:38:01.249952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.739 [2024-07-10 10:38:01.346534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.739 [2024-07-10 10:38:01.346589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.739 [2024-07-10 10:38:01.346644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.739 [2024-07-10 10:38:01.346648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.113 10:38:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:46.113 00:07:46.113 SPDK Configuration: 00:07:46.113 Core mask: 0xf 00:07:46.113 00:07:46.113 Accel Perf Configuration: 00:07:46.113 Workload Type: decompress 00:07:46.113 Transfer size: 4096 bytes 00:07:46.113 Vector count 1 00:07:46.113 Module: software 00:07:46.113 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.113 Queue depth: 32 00:07:46.113 Allocate depth: 32 00:07:46.113 # threads/core: 1 00:07:46.113 Run time: 1 seconds 00:07:46.113 Verify: Yes 00:07:46.113 00:07:46.113 Running for 1 seconds... 00:07:46.113 00:07:46.113 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.113 ------------------------------------------------------------------------------------ 00:07:46.113 0,0 54208/s 99 MiB/s 0 0 00:07:46.113 3,0 54656/s 100 MiB/s 0 0 00:07:46.113 2,0 54656/s 100 MiB/s 0 0 00:07:46.113 1,0 54656/s 100 MiB/s 0 0 00:07:46.113 ==================================================================================== 00:07:46.113 Total 218176/s 852 MiB/s 0 0' 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.113 10:38:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.113 10:38:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.113 10:38:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.113 10:38:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.113 10:38:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.113 10:38:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.113 10:38:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.113 10:38:02 -- accel/accel.sh@42 -- # jq -r . 00:07:46.113 [2024-07-10 10:38:02.607597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
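The '-m 0xf' core mask in this mcore case brings up reactors on cores 0-3, so the table above carries one row per core and the Total is simply their sum (54208 + 3 x 54656 = 218176 transfers/s). The aggregate bandwidth again lines up with transfers times the 4096-byte transfer size (informal check, not harness output):

  $ echo 'scale=1; 218176 * 4096 / 1048576' | bc
  852.2   # ~= the 852 MiB/s aggregate reported above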
00:07:46.113 [2024-07-10 10:38:02.607676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350018 ] 00:07:46.113 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.113 [2024-07-10 10:38:02.671581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.113 [2024-07-10 10:38:02.768580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.113 [2024-07-10 10:38:02.768633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.113 [2024-07-10 10:38:02.768685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.113 [2024-07-10 10:38:02.768688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=0xf 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=decompress 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=software 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=32 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=32 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.113 10:38:02 -- accel/accel.sh@21 -- # val=1 00:07:46.113 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.113 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 10:38:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.114 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 10:38:02 -- accel/accel.sh@21 -- # val=Yes 00:07:46.114 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.114 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.114 10:38:02 -- accel/accel.sh@21 -- # val= 00:07:46.114 10:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.114 10:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:47.485 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.485 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.485 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.485 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.485 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.485 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.485 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.485 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.485 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.485 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.486 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.486 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.486 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.486 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.486 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 
10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@21 -- # val= 00:07:47.486 10:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.486 10:38:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.486 10:38:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.486 10:38:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.486 10:38:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.486 00:07:47.486 real 0m2.833s 00:07:47.486 user 0m9.442s 00:07:47.486 sys 0m0.294s 00:07:47.486 10:38:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.486 10:38:04 -- common/autotest_common.sh@10 -- # set +x 00:07:47.486 ************************************ 00:07:47.486 END TEST accel_decomp_mcore 00:07:47.486 ************************************ 00:07:47.486 10:38:04 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.486 10:38:04 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:47.486 10:38:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.486 10:38:04 -- common/autotest_common.sh@10 -- # set +x 00:07:47.486 ************************************ 00:07:47.486 START TEST accel_decomp_full_mcore 00:07:47.486 ************************************ 00:07:47.486 10:38:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.486 10:38:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.486 10:38:04 -- accel/accel.sh@17 -- # local accel_module 00:07:47.486 10:38:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.486 10:38:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.486 10:38:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.486 10:38:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.486 10:38:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.486 10:38:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.486 10:38:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.486 10:38:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.486 10:38:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.486 10:38:04 -- accel/accel.sh@42 -- # jq -r . 00:07:47.486 [2024-07-10 10:38:04.054460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:47.486 [2024-07-10 10:38:04.054542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350304 ] 00:07:47.486 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.486 [2024-07-10 10:38:04.118143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.486 [2024-07-10 10:38:04.216015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.486 [2024-07-10 10:38:04.216068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.486 [2024-07-10 10:38:04.216121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.486 [2024-07-10 10:38:04.216124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.860 10:38:05 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:48.860 00:07:48.860 SPDK Configuration: 00:07:48.860 Core mask: 0xf 00:07:48.860 00:07:48.860 Accel Perf Configuration: 00:07:48.860 Workload Type: decompress 00:07:48.860 Transfer size: 111250 bytes 00:07:48.860 Vector count 1 00:07:48.860 Module: software 00:07:48.860 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.860 Queue depth: 32 00:07:48.860 Allocate depth: 32 00:07:48.860 # threads/core: 1 00:07:48.860 Run time: 1 seconds 00:07:48.860 Verify: Yes 00:07:48.860 00:07:48.860 Running for 1 seconds... 00:07:48.860 00:07:48.860 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.860 ------------------------------------------------------------------------------------ 00:07:48.860 0,0 3776/s 155 MiB/s 0 0 00:07:48.860 3,0 3776/s 155 MiB/s 0 0 00:07:48.860 2,0 3776/s 155 MiB/s 0 0 00:07:48.860 1,0 3776/s 155 MiB/s 0 0 00:07:48.860 ==================================================================================== 00:07:48.860 Total 15104/s 1602 MiB/s 0 0' 00:07:48.860 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:48.860 10:38:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.860 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:48.860 10:38:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.860 10:38:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.860 10:38:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.860 10:38:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.860 10:38:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.860 10:38:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.860 10:38:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.860 10:38:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.860 10:38:05 -- accel/accel.sh@42 -- # jq -r . 00:07:48.860 [2024-07-10 10:38:05.481791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
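The full_mcore case above combines the two previous variations: 111250-byte transfers ('-o 0') spread across the 0xf core mask. Each of the four cores sustains 3776 transfers/s, and the aggregate once more matches transfers times transfer size (informal check):

  $ echo 'scale=1; 15104 * 111250 / 1048576' | bc
  1602.4   # ~= the 1602 MiB/s Total reported above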
00:07:48.860 [2024-07-10 10:38:05.481869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350451 ] 00:07:48.860 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.860 [2024-07-10 10:38:05.542049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.860 [2024-07-10 10:38:05.638634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.860 [2024-07-10 10:38:05.638689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.860 [2024-07-10 10:38:05.638741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.860 [2024-07-10 10:38:05.638744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=0xf 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=decompress 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=software 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=32 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=32 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=1 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val=Yes 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.119 10:38:05 -- accel/accel.sh@21 -- # val= 00:07:49.119 10:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.119 10:38:05 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 
10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@21 -- # val= 00:07:50.493 10:38:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.493 10:38:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.493 10:38:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.493 10:38:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:50.493 10:38:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.493 00:07:50.493 real 0m2.865s 00:07:50.493 user 0m9.540s 00:07:50.493 sys 0m0.319s 00:07:50.493 10:38:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.493 10:38:06 -- common/autotest_common.sh@10 -- # set +x 00:07:50.493 ************************************ 00:07:50.493 END TEST accel_decomp_full_mcore 00:07:50.493 ************************************ 00:07:50.493 10:38:06 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.493 10:38:06 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:50.493 10:38:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.493 10:38:06 -- common/autotest_common.sh@10 -- # set +x 00:07:50.493 ************************************ 00:07:50.493 START TEST accel_decomp_mthread 00:07:50.493 ************************************ 00:07:50.493 10:38:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.493 10:38:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.493 10:38:06 -- accel/accel.sh@17 -- # local accel_module 00:07:50.493 10:38:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.493 10:38:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.493 10:38:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.493 10:38:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.493 10:38:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.493 10:38:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.493 10:38:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.493 10:38:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.494 10:38:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.494 10:38:06 -- accel/accel.sh@42 -- # jq -r . 00:07:50.494 [2024-07-10 10:38:06.949869] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:50.494 [2024-07-10 10:38:06.949965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350614 ] 00:07:50.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.494 [2024-07-10 10:38:07.013302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.494 [2024-07-10 10:38:07.105096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.867 10:38:08 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:51.867 00:07:51.867 SPDK Configuration: 00:07:51.867 Core mask: 0x1 00:07:51.867 00:07:51.867 Accel Perf Configuration: 00:07:51.867 Workload Type: decompress 00:07:51.867 Transfer size: 4096 bytes 00:07:51.867 Vector count 1 00:07:51.867 Module: software 00:07:51.867 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.867 Queue depth: 32 00:07:51.867 Allocate depth: 32 00:07:51.867 # threads/core: 2 00:07:51.867 Run time: 1 seconds 00:07:51.867 Verify: Yes 00:07:51.867 00:07:51.867 Running for 1 seconds... 00:07:51.867 00:07:51.867 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.867 ------------------------------------------------------------------------------------ 00:07:51.867 0,1 27936/s 51 MiB/s 0 0 00:07:51.867 0,0 27808/s 51 MiB/s 0 0 00:07:51.867 ==================================================================================== 00:07:51.867 Total 55744/s 217 MiB/s 0 0' 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 10:38:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 10:38:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.867 10:38:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.867 10:38:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.867 10:38:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.867 10:38:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.867 10:38:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.867 10:38:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.867 10:38:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.867 10:38:08 -- accel/accel.sh@42 -- # jq -r . 00:07:51.867 [2024-07-10 10:38:08.369513] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
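With '-T 2', the mthread case runs two worker threads on the single core in mask 0x1, which is why the table above has rows 0,0 and 0,1; their rates sum to the Total (27936 + 27808 = 55744 transfers/s). The bandwidth check is the same informal arithmetic as before:

  $ echo 'scale=1; 55744 * 4096 / 1048576' | bc
  217.7   # ~= the 217 MiB/s Total for both threads combined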
00:07:51.867 [2024-07-10 10:38:08.369591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350767 ] 00:07:51.867 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.867 [2024-07-10 10:38:08.431541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.867 [2024-07-10 10:38:08.522739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.867 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.867 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.867 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.867 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 10:38:08 -- accel/accel.sh@21 -- # val=0x1 00:07:51.867 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.867 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.867 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.867 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.867 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=decompress 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=software 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=32 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 
-- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=32 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=2 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val=Yes 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:51.868 10:38:08 -- accel/accel.sh@21 -- # val= 00:07:51.868 10:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # IFS=: 00:07:51.868 10:38:08 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.242 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.242 10:38:09 -- accel/accel.sh@21 -- # val= 00:07:53.242 10:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.243 10:38:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.243 10:38:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.243 10:38:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.243 10:38:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:53.243 10:38:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.243 00:07:53.243 real 0m2.832s 00:07:53.243 user 0m2.523s 00:07:53.243 sys 0m0.300s 00:07:53.243 10:38:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.243 10:38:09 -- common/autotest_common.sh@10 -- # set +x 
00:07:53.243 ************************************ 00:07:53.243 END TEST accel_decomp_mthread 00:07:53.243 ************************************ 00:07:53.243 10:38:09 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.243 10:38:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:53.243 10:38:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.243 10:38:09 -- common/autotest_common.sh@10 -- # set +x 00:07:53.243 ************************************ 00:07:53.243 START TEST accel_deomp_full_mthread 00:07:53.243 ************************************ 00:07:53.243 10:38:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.243 10:38:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.243 10:38:09 -- accel/accel.sh@17 -- # local accel_module 00:07:53.243 10:38:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.243 10:38:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.243 10:38:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.243 10:38:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.243 10:38:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.243 10:38:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.243 10:38:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.243 10:38:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.243 10:38:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.243 10:38:09 -- accel/accel.sh@42 -- # jq -r . 00:07:53.243 [2024-07-10 10:38:09.806318] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:53.243 [2024-07-10 10:38:09.806401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351037 ] 00:07:53.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.243 [2024-07-10 10:38:09.871614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.243 [2024-07-10 10:38:09.965084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.616 10:38:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:54.616 00:07:54.616 SPDK Configuration: 00:07:54.616 Core mask: 0x1 00:07:54.616 00:07:54.616 Accel Perf Configuration: 00:07:54.616 Workload Type: decompress 00:07:54.616 Transfer size: 111250 bytes 00:07:54.616 Vector count 1 00:07:54.616 Module: software 00:07:54.616 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:54.616 Queue depth: 32 00:07:54.616 Allocate depth: 32 00:07:54.616 # threads/core: 2 00:07:54.616 Run time: 1 seconds 00:07:54.616 Verify: Yes 00:07:54.616 00:07:54.616 Running for 1 seconds... 
00:07:54.616 00:07:54.616 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:54.616 ------------------------------------------------------------------------------------ 00:07:54.616 0,1 1952/s 80 MiB/s 0 0 00:07:54.616 0,0 1920/s 79 MiB/s 0 0 00:07:54.616 ==================================================================================== 00:07:54.616 Total 3872/s 410 MiB/s 0 0' 00:07:54.616 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.616 10:38:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.616 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.616 10:38:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.616 10:38:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.616 10:38:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.616 10:38:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.616 10:38:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.616 10:38:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.616 10:38:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.616 10:38:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.616 10:38:11 -- accel/accel.sh@42 -- # jq -r . 00:07:54.616 [2024-07-10 10:38:11.246010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:54.616 [2024-07-10 10:38:11.246091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351181 ] 00:07:54.616 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.616 [2024-07-10 10:38:11.309956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.616 [2024-07-10 10:38:11.403140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=0x1 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=decompress 00:07:54.874 
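For reference, the throughput table above can be reproduced outside the test harness by invoking the accel_perf example with the same flags echoed in the trace; a minimal sketch follows (dropping the JSON accel config that the harness feeds over /dev/fd/62 is an assumption that the software-module defaults are sufficient):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w decompress \
      -l test/accel/bib -y -o 0 -T 2
  # -t 1: run for 1 second, -w decompress: workload type, -T 2: two threads per core,
  # matching the "Accel Perf Configuration" block printed above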
10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=software 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=32 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=32 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=2 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val=Yes 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.874 10:38:11 -- accel/accel.sh@21 -- # val= 00:07:54.874 10:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # IFS=: 00:07:54.874 10:38:11 -- accel/accel.sh@20 -- # read -r var val 00:07:56.256 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@21 -- # val= 00:07:56.257 10:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # IFS=: 00:07:56.257 10:38:12 -- accel/accel.sh@20 -- # read -r var val 00:07:56.257 10:38:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.257 10:38:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.257 10:38:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.257 00:07:56.257 real 0m2.893s 00:07:56.257 user 0m2.586s 00:07:56.257 sys 0m0.300s 00:07:56.257 10:38:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.257 10:38:12 -- common/autotest_common.sh@10 -- # set +x 00:07:56.257 ************************************ 00:07:56.257 END TEST accel_deomp_full_mthread 00:07:56.257 ************************************ 00:07:56.257 10:38:12 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:56.257 10:38:12 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:56.257 10:38:12 -- accel/accel.sh@129 -- # build_accel_config 00:07:56.257 10:38:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:56.257 10:38:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.257 10:38:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.257 10:38:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.257 10:38:12 -- common/autotest_common.sh@10 -- # set +x 00:07:56.257 10:38:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.257 10:38:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.257 10:38:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.257 10:38:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.257 10:38:12 -- accel/accel.sh@42 -- # jq -r . 00:07:56.257 ************************************ 00:07:56.257 START TEST accel_dif_functional_tests 00:07:56.257 ************************************ 00:07:56.257 10:38:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:56.257 [2024-07-10 10:38:12.746572] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:56.257 [2024-07-10 10:38:12.746656] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351336 ] 00:07:56.257 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.257 [2024-07-10 10:38:12.813410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.257 [2024-07-10 10:38:12.912201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.257 [2024-07-10 10:38:12.912256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.257 [2024-07-10 10:38:12.912259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.257 00:07:56.257 00:07:56.257 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.257 http://cunit.sourceforge.net/ 00:07:56.257 00:07:56.257 00:07:56.257 Suite: accel_dif 00:07:56.257 Test: verify: DIF generated, GUARD check ...passed 00:07:56.257 Test: verify: DIF generated, APPTAG check ...passed 00:07:56.257 Test: verify: DIF generated, REFTAG check ...passed 00:07:56.257 Test: verify: DIF not generated, GUARD check ...[2024-07-10 10:38:13.006442] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:56.257 [2024-07-10 10:38:13.006509] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:56.257 passed 00:07:56.257 Test: verify: DIF not generated, APPTAG check ...[2024-07-10 10:38:13.006553] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:56.257 [2024-07-10 10:38:13.006583] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:56.257 passed 00:07:56.257 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 10:38:13.006619] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:56.257 [2024-07-10 10:38:13.006659] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:56.257 passed 00:07:56.258 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:56.258 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-10 10:38:13.006731] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:56.258 passed 00:07:56.258 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:56.258 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:56.258 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:56.258 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-10 10:38:13.006890] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:56.258 passed 00:07:56.258 Test: generate copy: DIF generated, GUARD check ...passed 00:07:56.258 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:56.258 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:56.258 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:56.258 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:56.258 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:56.258 Test: generate copy: iovecs-len validate ...[2024-07-10 10:38:13.007146] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
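The accel_dif CUnit suite above is produced by a standalone test binary rather than accel_perf; a minimal sketch of running it by hand (omitting the JSON accel config that the harness passes over /dev/fd/62 is an assumption that the software module defaults are enough):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/accel/dif/dif    # expect the 20-test accel_dif suite summary shown above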
00:07:56.258 passed 00:07:56.258 Test: generate copy: buffer alignment validate ...passed 00:07:56.258 00:07:56.258 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.258 suites 1 1 n/a 0 0 00:07:56.258 tests 20 20 20 0 0 00:07:56.258 asserts 204 204 204 0 n/a 00:07:56.258 00:07:56.258 Elapsed time = 0.003 seconds 00:07:56.516 00:07:56.516 real 0m0.516s 00:07:56.516 user 0m0.797s 00:07:56.516 sys 0m0.182s 00:07:56.516 10:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.516 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.516 ************************************ 00:07:56.516 END TEST accel_dif_functional_tests 00:07:56.516 ************************************ 00:07:56.516 00:07:56.516 real 0m59.832s 00:07:56.516 user 1m7.609s 00:07:56.516 sys 0m7.225s 00:07:56.516 10:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.516 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.516 ************************************ 00:07:56.516 END TEST accel 00:07:56.516 ************************************ 00:07:56.516 10:38:13 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.516 10:38:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.516 10:38:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.516 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.516 ************************************ 00:07:56.516 START TEST accel_rpc 00:07:56.516 ************************************ 00:07:56.516 10:38:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.516 * Looking for test storage... 00:07:56.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:56.516 10:38:13 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.516 10:38:13 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3351524 00:07:56.516 10:38:13 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:56.516 10:38:13 -- accel/accel_rpc.sh@15 -- # waitforlisten 3351524 00:07:56.516 10:38:13 -- common/autotest_common.sh@819 -- # '[' -z 3351524 ']' 00:07:56.516 10:38:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.516 10:38:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.516 10:38:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.516 10:38:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.516 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 [2024-07-10 10:38:13.373262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:56.774 [2024-07-10 10:38:13.373353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351524 ] 00:07:56.774 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.774 [2024-07-10 10:38:13.429690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.774 [2024-07-10 10:38:13.514625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.774 [2024-07-10 10:38:13.514788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.774 10:38:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.774 10:38:13 -- common/autotest_common.sh@852 -- # return 0 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:56.774 10:38:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.774 10:38:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.774 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 ************************************ 00:07:56.774 START TEST accel_assign_opcode 00:07:56.774 ************************************ 00:07:56.774 10:38:13 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:56.774 10:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.774 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 [2024-07-10 10:38:13.583343] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:56.774 10:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:56.774 10:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.774 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 [2024-07-10 10:38:13.591357] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:56.774 10:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.774 10:38:13 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:56.774 10:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.774 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.032 10:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.032 10:38:13 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:57.032 10:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.032 10:38:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.032 10:38:13 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:57.032 10:38:13 -- accel/accel_rpc.sh@42 -- # grep software 00:07:57.032 10:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.290 software 00:07:57.290 00:07:57.290 real 0m0.300s 00:07:57.290 user 0m0.042s 00:07:57.290 sys 0m0.006s 00:07:57.290 10:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.290 10:38:13 -- common/autotest_common.sh@10 -- # set +x 
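The assign-opcode flow above drives a target started with --wait-for-rpc entirely over JSON-RPC; a minimal sketch of the same sequence issued by hand with scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                    # finish subsystem initialization
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # should print "software", as checked above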
00:07:57.290 ************************************ 00:07:57.290 END TEST accel_assign_opcode 00:07:57.290 ************************************ 00:07:57.290 10:38:13 -- accel/accel_rpc.sh@55 -- # killprocess 3351524 00:07:57.290 10:38:13 -- common/autotest_common.sh@926 -- # '[' -z 3351524 ']' 00:07:57.290 10:38:13 -- common/autotest_common.sh@930 -- # kill -0 3351524 00:07:57.291 10:38:13 -- common/autotest_common.sh@931 -- # uname 00:07:57.291 10:38:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.291 10:38:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3351524 00:07:57.291 10:38:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.291 10:38:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.291 10:38:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3351524' 00:07:57.291 killing process with pid 3351524 00:07:57.291 10:38:13 -- common/autotest_common.sh@945 -- # kill 3351524 00:07:57.291 10:38:13 -- common/autotest_common.sh@950 -- # wait 3351524 00:07:57.549 00:07:57.549 real 0m1.057s 00:07:57.549 user 0m0.991s 00:07:57.549 sys 0m0.399s 00:07:57.549 10:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.549 10:38:14 -- common/autotest_common.sh@10 -- # set +x 00:07:57.549 ************************************ 00:07:57.549 END TEST accel_rpc 00:07:57.549 ************************************ 00:07:57.549 10:38:14 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:57.549 10:38:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.549 10:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.549 10:38:14 -- common/autotest_common.sh@10 -- # set +x 00:07:57.549 ************************************ 00:07:57.549 START TEST app_cmdline 00:07:57.549 ************************************ 00:07:57.549 10:38:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:57.807 * Looking for test storage... 00:07:57.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:57.807 10:38:14 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:57.807 10:38:14 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3351729 00:07:57.807 10:38:14 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:57.807 10:38:14 -- app/cmdline.sh@18 -- # waitforlisten 3351729 00:07:57.807 10:38:14 -- common/autotest_common.sh@819 -- # '[' -z 3351729 ']' 00:07:57.807 10:38:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.807 10:38:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.807 10:38:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.807 10:38:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.807 10:38:14 -- common/autotest_common.sh@10 -- # set +x 00:07:57.807 [2024-07-10 10:38:14.465983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:57.807 [2024-07-10 10:38:14.466071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351729 ] 00:07:57.807 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.807 [2024-07-10 10:38:14.527481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.807 [2024-07-10 10:38:14.618726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.807 [2024-07-10 10:38:14.618884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.738 10:38:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.738 10:38:15 -- common/autotest_common.sh@852 -- # return 0 00:07:58.738 10:38:15 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:58.995 { 00:07:58.995 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:58.995 "fields": { 00:07:58.995 "major": 24, 00:07:58.995 "minor": 1, 00:07:58.995 "patch": 1, 00:07:58.995 "suffix": "-pre", 00:07:58.995 "commit": "4b94202c6" 00:07:58.995 } 00:07:58.995 } 00:07:58.995 10:38:15 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:58.995 10:38:15 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:58.995 10:38:15 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:58.995 10:38:15 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:58.995 10:38:15 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:58.995 10:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.995 10:38:15 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:58.995 10:38:15 -- common/autotest_common.sh@10 -- # set +x 00:07:58.995 10:38:15 -- app/cmdline.sh@26 -- # sort 00:07:58.995 10:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.995 10:38:15 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:58.995 10:38:15 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:58.995 10:38:15 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.995 10:38:15 -- common/autotest_common.sh@640 -- # local es=0 00:07:58.995 10:38:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.995 10:38:15 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.995 10:38:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.995 10:38:15 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.995 10:38:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.995 10:38:15 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.995 10:38:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.995 10:38:15 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.995 10:38:15 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:58.995 10:38:15 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.252 request: 00:07:59.252 { 00:07:59.252 "method": "env_dpdk_get_mem_stats", 00:07:59.252 "req_id": 1 00:07:59.252 } 00:07:59.252 Got JSON-RPC error response 00:07:59.252 response: 00:07:59.252 { 00:07:59.252 "code": -32601, 00:07:59.252 "message": "Method not found" 00:07:59.252 } 00:07:59.252 10:38:15 -- common/autotest_common.sh@643 -- # es=1 00:07:59.252 10:38:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.252 10:38:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:59.253 10:38:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.253 10:38:15 -- app/cmdline.sh@1 -- # killprocess 3351729 00:07:59.253 10:38:15 -- common/autotest_common.sh@926 -- # '[' -z 3351729 ']' 00:07:59.253 10:38:15 -- common/autotest_common.sh@930 -- # kill -0 3351729 00:07:59.253 10:38:15 -- common/autotest_common.sh@931 -- # uname 00:07:59.253 10:38:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:59.253 10:38:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3351729 00:07:59.253 10:38:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:59.253 10:38:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:59.253 10:38:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3351729' 00:07:59.253 killing process with pid 3351729 00:07:59.253 10:38:15 -- common/autotest_common.sh@945 -- # kill 3351729 00:07:59.253 10:38:15 -- common/autotest_common.sh@950 -- # wait 3351729 00:07:59.818 00:07:59.818 real 0m1.985s 00:07:59.818 user 0m2.481s 00:07:59.818 sys 0m0.475s 00:07:59.819 10:38:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 ************************************ 00:07:59.819 END TEST app_cmdline 00:07:59.819 ************************************ 00:07:59.819 10:38:16 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:59.819 10:38:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.819 10:38:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 ************************************ 00:07:59.819 START TEST version 00:07:59.819 ************************************ 00:07:59.819 10:38:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:59.819 * Looking for test storage... 
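The -32601 response above is exactly what the RPC allow-list is meant to produce: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is refused before it is dispatched. A minimal sketch of the same checks with scripts/rpc.py, run from the spdk repository root against the default socket:

  ./scripts/rpc.py spdk_get_version         # allowed; returns the version object shown above
  ./scripts/rpc.py rpc_get_methods          # allowed; lists only the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats   # refused with JSON-RPC error -32601 "Method not found"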
00:07:59.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.819 10:38:16 -- app/version.sh@17 -- # get_header_version major 00:07:59.819 10:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.819 10:38:16 -- app/version.sh@14 -- # cut -f2 00:07:59.819 10:38:16 -- app/version.sh@14 -- # tr -d '"' 00:07:59.819 10:38:16 -- app/version.sh@17 -- # major=24 00:07:59.819 10:38:16 -- app/version.sh@18 -- # get_header_version minor 00:07:59.819 10:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.819 10:38:16 -- app/version.sh@14 -- # cut -f2 00:07:59.819 10:38:16 -- app/version.sh@14 -- # tr -d '"' 00:07:59.819 10:38:16 -- app/version.sh@18 -- # minor=1 00:07:59.819 10:38:16 -- app/version.sh@19 -- # get_header_version patch 00:07:59.819 10:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.819 10:38:16 -- app/version.sh@14 -- # cut -f2 00:07:59.819 10:38:16 -- app/version.sh@14 -- # tr -d '"' 00:07:59.819 10:38:16 -- app/version.sh@19 -- # patch=1 00:07:59.819 10:38:16 -- app/version.sh@20 -- # get_header_version suffix 00:07:59.819 10:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.819 10:38:16 -- app/version.sh@14 -- # cut -f2 00:07:59.819 10:38:16 -- app/version.sh@14 -- # tr -d '"' 00:07:59.819 10:38:16 -- app/version.sh@20 -- # suffix=-pre 00:07:59.819 10:38:16 -- app/version.sh@22 -- # version=24.1 00:07:59.819 10:38:16 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:59.819 10:38:16 -- app/version.sh@25 -- # version=24.1.1 00:07:59.819 10:38:16 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:59.819 10:38:16 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:59.819 10:38:16 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:59.819 10:38:16 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:59.819 10:38:16 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:59.819 00:07:59.819 real 0m0.102s 00:07:59.819 user 0m0.054s 00:07:59.819 sys 0m0.068s 00:07:59.819 10:38:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 ************************************ 00:07:59.819 END TEST version 00:07:59.819 ************************************ 00:07:59.819 10:38:16 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@204 -- # uname -s 00:07:59.819 10:38:16 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:59.819 10:38:16 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:59.819 10:38:16 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:59.819 10:38:16 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:59.819 10:38:16 -- 
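version.sh above reads each version component straight out of include/spdk/version.h with a grep/cut/tr pipeline; a standalone sketch of the same parsing (paths as in this workspace):

  hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}.${patch}${suffix}"   # 24.1.1-pre for this build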
common/autotest_common.sh@718 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 10:38:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:59.819 10:38:16 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:59.819 10:38:16 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.819 10:38:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:59.819 10:38:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 ************************************ 00:07:59.819 START TEST nvmf_tcp 00:07:59.819 ************************************ 00:07:59.819 10:38:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.819 * Looking for test storage... 00:07:59.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.819 10:38:16 -- nvmf/common.sh@7 -- # uname -s 00:07:59.819 10:38:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.819 10:38:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.819 10:38:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.819 10:38:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.819 10:38:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.819 10:38:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.819 10:38:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.819 10:38:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.819 10:38:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.819 10:38:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.819 10:38:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.819 10:38:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.819 10:38:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.819 10:38:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.819 10:38:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.819 10:38:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.819 10:38:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.819 10:38:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.819 10:38:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.819 10:38:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.819 10:38:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.819 10:38:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.819 10:38:16 -- paths/export.sh@5 -- # export PATH 00:07:59.819 10:38:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.819 10:38:16 -- nvmf/common.sh@46 -- # : 0 00:07:59.819 10:38:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:59.819 10:38:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:59.819 10:38:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.819 10:38:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.819 10:38:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:59.819 10:38:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:59.819 10:38:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:59.819 10:38:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:59.819 10:38:16 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:59.819 10:38:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:59.819 10:38:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.819 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:07:59.819 ************************************ 00:07:59.819 START TEST nvmf_example 00:07:59.819 ************************************ 00:07:59.819 10:38:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:59.819 * Looking for test storage... 
00:07:59.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.819 10:38:16 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.819 10:38:16 -- nvmf/common.sh@7 -- # uname -s 00:07:59.819 10:38:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.820 10:38:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.820 10:38:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.820 10:38:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.820 10:38:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.820 10:38:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.820 10:38:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.820 10:38:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.820 10:38:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.820 10:38:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.078 10:38:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.078 10:38:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:00.078 10:38:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.078 10:38:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.078 10:38:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.078 10:38:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.078 10:38:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.078 10:38:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.078 10:38:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.078 10:38:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.078 10:38:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.078 10:38:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.078 10:38:16 -- paths/export.sh@5 -- # export PATH 00:08:00.078 10:38:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.078 10:38:16 -- nvmf/common.sh@46 -- # : 0 00:08:00.078 10:38:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.078 10:38:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.078 10:38:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.078 10:38:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.078 10:38:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.078 10:38:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.078 10:38:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.078 10:38:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.078 10:38:16 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:00.078 10:38:16 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:00.078 10:38:16 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:00.078 10:38:16 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:00.078 10:38:16 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:00.078 10:38:16 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:00.078 10:38:16 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:00.078 10:38:16 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:00.078 10:38:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.078 10:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:00.078 10:38:16 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:00.078 10:38:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.078 10:38:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.078 10:38:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.078 10:38:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.078 10:38:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.078 10:38:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.078 10:38:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.078 10:38:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.078 10:38:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:00.078 10:38:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:00.078 10:38:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:00.078 10:38:16 -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.978 10:38:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:01.978 10:38:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:01.978 10:38:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:01.978 10:38:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:01.978 10:38:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:01.978 10:38:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:01.978 10:38:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:01.978 10:38:18 -- nvmf/common.sh@294 -- # net_devs=() 00:08:01.978 10:38:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:01.978 10:38:18 -- nvmf/common.sh@295 -- # e810=() 00:08:01.978 10:38:18 -- nvmf/common.sh@295 -- # local -ga e810 00:08:01.978 10:38:18 -- nvmf/common.sh@296 -- # x722=() 00:08:01.978 10:38:18 -- nvmf/common.sh@296 -- # local -ga x722 00:08:01.978 10:38:18 -- nvmf/common.sh@297 -- # mlx=() 00:08:01.978 10:38:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:01.978 10:38:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.978 10:38:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:01.978 10:38:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:01.978 10:38:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:01.978 10:38:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:01.978 10:38:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.978 10:38:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:01.978 10:38:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.978 10:38:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:08:01.978 10:38:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:01.978 10:38:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:01.978 10:38:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.978 10:38:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:01.978 10:38:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.978 10:38:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.978 10:38:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.978 10:38:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:01.978 10:38:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.978 10:38:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:01.978 10:38:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.978 10:38:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:01.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.978 10:38:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.978 10:38:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:01.978 10:38:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:01.978 10:38:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:01.978 10:38:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.978 10:38:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.978 10:38:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.978 10:38:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:01.978 10:38:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.978 10:38:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.978 10:38:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:01.978 10:38:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.978 10:38:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.978 10:38:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:01.978 10:38:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:01.978 10:38:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.978 10:38:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.978 10:38:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.978 10:38:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.978 10:38:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:01.978 10:38:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.978 10:38:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.978 10:38:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.978 10:38:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:01.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:08:01.978 00:08:01.978 --- 10.0.0.2 ping statistics --- 00:08:01.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.978 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:01.978 10:38:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:01.978 00:08:01.978 --- 10.0.0.1 ping statistics --- 00:08:01.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.978 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:01.978 10:38:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.978 10:38:18 -- nvmf/common.sh@410 -- # return 0 00:08:01.978 10:38:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:01.978 10:38:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.978 10:38:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:01.978 10:38:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.978 10:38:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:01.978 10:38:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:01.978 10:38:18 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:01.978 10:38:18 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:01.978 10:38:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:01.978 10:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:01.978 10:38:18 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:01.978 10:38:18 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:01.978 10:38:18 -- target/nvmf_example.sh@34 -- # nvmfpid=3353756 00:08:01.978 10:38:18 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:01.978 10:38:18 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.978 10:38:18 -- target/nvmf_example.sh@36 -- # waitforlisten 3353756 00:08:01.978 10:38:18 -- common/autotest_common.sh@819 -- # '[' -z 3353756 ']' 00:08:01.978 10:38:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.978 10:38:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.978 10:38:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
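The nvmf_tcp_init sequence traced above wires the two E810 ports back to back by hiding the target port in a network namespace: cvl_0_0 becomes 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as 10.0.0.1, NVMe/TCP port 4420 is opened in iptables, and both directions are ping-verified before the nvme-tcp module is loaded. A condensed replay of the commands from this run (interface names and addresses as above, not a general recipe):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                   # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root namespace
    modprobe nvme-tcp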
00:08:01.978 10:38:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.978 10:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.236 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.169 10:38:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:03.169 10:38:19 -- common/autotest_common.sh@852 -- # return 0 00:08:03.169 10:38:19 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:03.169 10:38:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:03.169 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 10:38:19 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.169 10:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.169 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 10:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.169 10:38:19 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:03.169 10:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.169 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 10:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.169 10:38:19 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:03.169 10:38:19 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.169 10:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.169 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 10:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.169 10:38:19 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:03.169 10:38:19 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.169 10:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.169 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 10:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.169 10:38:19 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.169 10:38:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.169 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:03.169 10:38:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.169 10:38:19 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:03.169 10:38:19 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:03.169 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.360 Initializing NVMe Controllers 00:08:15.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:15.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:15.360 Initialization complete. Launching workers. 
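Everything the example target serves is created over its RPC socket before the perf client connects: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The harness drives this through its rpc_cmd helper; a hand-run equivalent using scripts/rpc.py would look roughly like the sketch below (running from the SPDK tree is an assumption, the arguments are the ones shown above), and the results table that follows came from exactly this workload:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512               # 64 MiB bdev, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 10 s of 4 KiB random mixed I/O at queue depth 64 against that listener
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'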
00:08:15.360 ======================================================== 00:08:15.360 Latency(us) 00:08:15.360 Device Information : IOPS MiB/s Average min max 00:08:15.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15065.63 58.85 4247.58 863.33 16292.76 00:08:15.360 ======================================================== 00:08:15.360 Total : 15065.63 58.85 4247.58 863.33 16292.76 00:08:15.360 00:08:15.360 10:38:30 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:15.360 10:38:30 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:15.360 10:38:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:15.360 10:38:30 -- nvmf/common.sh@116 -- # sync 00:08:15.360 10:38:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:15.360 10:38:30 -- nvmf/common.sh@119 -- # set +e 00:08:15.360 10:38:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:15.360 10:38:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:15.360 rmmod nvme_tcp 00:08:15.360 rmmod nvme_fabrics 00:08:15.360 rmmod nvme_keyring 00:08:15.360 10:38:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:15.360 10:38:30 -- nvmf/common.sh@123 -- # set -e 00:08:15.360 10:38:30 -- nvmf/common.sh@124 -- # return 0 00:08:15.360 10:38:30 -- nvmf/common.sh@477 -- # '[' -n 3353756 ']' 00:08:15.360 10:38:30 -- nvmf/common.sh@478 -- # killprocess 3353756 00:08:15.360 10:38:30 -- common/autotest_common.sh@926 -- # '[' -z 3353756 ']' 00:08:15.360 10:38:30 -- common/autotest_common.sh@930 -- # kill -0 3353756 00:08:15.360 10:38:30 -- common/autotest_common.sh@931 -- # uname 00:08:15.360 10:38:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:15.360 10:38:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3353756 00:08:15.360 10:38:30 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:15.360 10:38:30 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:15.360 10:38:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3353756' 00:08:15.360 killing process with pid 3353756 00:08:15.360 10:38:30 -- common/autotest_common.sh@945 -- # kill 3353756 00:08:15.360 10:38:30 -- common/autotest_common.sh@950 -- # wait 3353756 00:08:15.360 nvmf threads initialize successfully 00:08:15.360 bdev subsystem init successfully 00:08:15.360 created a nvmf target service 00:08:15.360 create targets's poll groups done 00:08:15.360 all subsystems of target started 00:08:15.360 nvmf target is running 00:08:15.360 all subsystems of target stopped 00:08:15.360 destroy targets's poll groups done 00:08:15.360 destroyed the nvmf target service 00:08:15.360 bdev subsystem finish successfully 00:08:15.360 nvmf threads destroy successfully 00:08:15.360 10:38:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:15.360 10:38:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:15.360 10:38:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:15.360 10:38:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.360 10:38:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:15.360 10:38:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.360 10:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.360 10:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.930 10:38:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:15.930 10:38:32 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:15.930 10:38:32 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:15.930 10:38:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.930 00:08:15.930 real 0m15.894s 00:08:15.930 user 0m45.293s 00:08:15.930 sys 0m3.260s 00:08:15.930 10:38:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.930 10:38:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.930 ************************************ 00:08:15.930 END TEST nvmf_example 00:08:15.930 ************************************ 00:08:15.930 10:38:32 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.930 10:38:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:15.930 10:38:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.930 10:38:32 -- common/autotest_common.sh@10 -- # set +x 00:08:15.930 ************************************ 00:08:15.930 START TEST nvmf_filesystem 00:08:15.930 ************************************ 00:08:15.930 10:38:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.930 * Looking for test storage... 00:08:15.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.930 10:38:32 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:15.930 10:38:32 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:15.930 10:38:32 -- common/autotest_common.sh@34 -- # set -e 00:08:15.930 10:38:32 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:15.930 10:38:32 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:15.930 10:38:32 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:15.930 10:38:32 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:15.930 10:38:32 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:15.930 10:38:32 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:15.930 10:38:32 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:15.930 10:38:32 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:15.930 10:38:32 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:15.930 10:38:32 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:15.930 10:38:32 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:15.930 10:38:32 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:15.930 10:38:32 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:15.930 10:38:32 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:15.930 10:38:32 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:15.930 10:38:32 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:15.930 10:38:32 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:15.930 10:38:32 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:15.930 10:38:32 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:15.930 10:38:32 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:15.930 10:38:32 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:15.930 10:38:32 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:15.930 10:38:32 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:15.930 10:38:32 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:15.930 10:38:32 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:15.930 10:38:32 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:15.930 10:38:32 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:15.930 10:38:32 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:15.930 10:38:32 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:15.930 10:38:32 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:15.930 10:38:32 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:15.930 10:38:32 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:15.930 10:38:32 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:15.930 10:38:32 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:15.930 10:38:32 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:15.930 10:38:32 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:15.930 10:38:32 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:15.930 10:38:32 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:15.930 10:38:32 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:15.930 10:38:32 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.930 10:38:32 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:15.930 10:38:32 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:15.930 10:38:32 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:15.930 10:38:32 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:15.930 10:38:32 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:15.930 10:38:32 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:15.930 10:38:32 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:15.930 10:38:32 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:15.930 10:38:32 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:15.930 10:38:32 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:15.930 10:38:32 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:15.930 10:38:32 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:15.930 10:38:32 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:15.930 10:38:32 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:15.930 10:38:32 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:15.930 10:38:32 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:15.930 10:38:32 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:15.930 10:38:32 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:15.930 10:38:32 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:15.930 10:38:32 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:15.930 10:38:32 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:15.930 10:38:32 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:15.930 10:38:32 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:15.930 10:38:32 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:15.930 10:38:32 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.930 10:38:32 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:15.930 10:38:32 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:15.930 10:38:32 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:15.930 10:38:32 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:15.930 10:38:32 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:15.930 10:38:32 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:15.930 10:38:32 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:15.930 10:38:32 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:15.930 10:38:32 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:15.930 10:38:32 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:15.930 10:38:32 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:15.930 10:38:32 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:15.930 10:38:32 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:15.930 10:38:32 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:15.930 10:38:32 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:15.930 10:38:32 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:15.930 10:38:32 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:15.930 10:38:32 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:15.931 10:38:32 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:15.931 10:38:32 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:15.931 10:38:32 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:15.931 10:38:32 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:15.931 10:38:32 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.931 10:38:32 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.931 10:38:32 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:15.931 10:38:32 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.931 10:38:32 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:15.931 10:38:32 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:15.931 10:38:32 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:15.931 10:38:32 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:15.931 10:38:32 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:15.931 10:38:32 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:15.931 10:38:32 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:15.931 10:38:32 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:15.931 #define SPDK_CONFIG_H 00:08:15.931 #define SPDK_CONFIG_APPS 1 00:08:15.931 #define SPDK_CONFIG_ARCH native 00:08:15.931 #undef SPDK_CONFIG_ASAN 00:08:15.931 #undef SPDK_CONFIG_AVAHI 00:08:15.931 #undef SPDK_CONFIG_CET 00:08:15.931 #define SPDK_CONFIG_COVERAGE 1 00:08:15.931 #define SPDK_CONFIG_CROSS_PREFIX 00:08:15.931 #undef SPDK_CONFIG_CRYPTO 00:08:15.931 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:15.931 #undef SPDK_CONFIG_CUSTOMOCF 00:08:15.931 #undef SPDK_CONFIG_DAOS 00:08:15.931 #define SPDK_CONFIG_DAOS_DIR 00:08:15.931 #define SPDK_CONFIG_DEBUG 1 00:08:15.931 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:15.931 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.931 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:15.931 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.931 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:15.931 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:15.931 #define SPDK_CONFIG_EXAMPLES 1 00:08:15.931 #undef SPDK_CONFIG_FC 00:08:15.931 #define SPDK_CONFIG_FC_PATH 00:08:15.931 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:15.931 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:15.931 #undef SPDK_CONFIG_FUSE 00:08:15.931 #undef SPDK_CONFIG_FUZZER 00:08:15.931 #define SPDK_CONFIG_FUZZER_LIB 00:08:15.931 #undef SPDK_CONFIG_GOLANG 00:08:15.931 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:15.931 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:15.931 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:15.931 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:15.931 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:15.931 #define SPDK_CONFIG_IDXD 1 00:08:15.931 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:15.931 #undef SPDK_CONFIG_IPSEC_MB 00:08:15.931 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:15.931 #define SPDK_CONFIG_ISAL 1 00:08:15.931 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:15.931 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:15.931 #define SPDK_CONFIG_LIBDIR 00:08:15.931 #undef SPDK_CONFIG_LTO 00:08:15.931 #define SPDK_CONFIG_MAX_LCORES 00:08:15.931 #define SPDK_CONFIG_NVME_CUSE 1 00:08:15.931 #undef SPDK_CONFIG_OCF 00:08:15.931 #define SPDK_CONFIG_OCF_PATH 00:08:15.931 #define SPDK_CONFIG_OPENSSL_PATH 00:08:15.931 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:15.931 #undef SPDK_CONFIG_PGO_USE 00:08:15.931 #define SPDK_CONFIG_PREFIX /usr/local 00:08:15.931 #undef SPDK_CONFIG_RAID5F 00:08:15.931 #undef SPDK_CONFIG_RBD 00:08:15.931 #define SPDK_CONFIG_RDMA 1 00:08:15.931 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:15.931 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:15.931 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:15.931 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:15.931 #define SPDK_CONFIG_SHARED 1 00:08:15.931 #undef SPDK_CONFIG_SMA 00:08:15.931 #define SPDK_CONFIG_TESTS 1 00:08:15.931 #undef SPDK_CONFIG_TSAN 00:08:15.931 #define SPDK_CONFIG_UBLK 1 00:08:15.931 #define SPDK_CONFIG_UBSAN 1 00:08:15.931 #undef SPDK_CONFIG_UNIT_TESTS 00:08:15.931 #undef SPDK_CONFIG_URING 00:08:15.931 #define SPDK_CONFIG_URING_PATH 00:08:15.931 #undef SPDK_CONFIG_URING_ZNS 00:08:15.931 #undef SPDK_CONFIG_USDT 00:08:15.931 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:15.931 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:15.931 #define SPDK_CONFIG_VFIO_USER 1 00:08:15.931 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:15.931 #define SPDK_CONFIG_VHOST 1 00:08:15.931 #define SPDK_CONFIG_VIRTIO 1 00:08:15.931 #undef SPDK_CONFIG_VTUNE 00:08:15.931 #define SPDK_CONFIG_VTUNE_DIR 00:08:15.931 #define SPDK_CONFIG_WERROR 1 00:08:15.931 #define SPDK_CONFIG_WPDK_DIR 00:08:15.931 #undef SPDK_CONFIG_XNVME 00:08:15.931 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:15.931 10:38:32 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:15.931 10:38:32 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.931 10:38:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.931 10:38:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.931 
10:38:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.931 10:38:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.931 10:38:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.931 10:38:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.931 10:38:32 -- paths/export.sh@5 -- # export PATH 00:08:15.931 10:38:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.931 10:38:32 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.931 10:38:32 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.931 10:38:32 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.931 10:38:32 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.931 10:38:32 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:15.931 10:38:32 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.931 10:38:32 -- pm/common@16 -- # TEST_TAG=N/A 00:08:15.931 10:38:32 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:15.931 10:38:32 -- common/autotest_common.sh@52 -- # : 1 00:08:15.931 10:38:32 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:15.931 10:38:32 -- common/autotest_common.sh@56 -- # : 0 
00:08:15.931 10:38:32 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:15.931 10:38:32 -- common/autotest_common.sh@58 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:15.931 10:38:32 -- common/autotest_common.sh@60 -- # : 1 00:08:15.931 10:38:32 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:15.931 10:38:32 -- common/autotest_common.sh@62 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:15.931 10:38:32 -- common/autotest_common.sh@64 -- # : 00:08:15.931 10:38:32 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:15.931 10:38:32 -- common/autotest_common.sh@66 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:15.931 10:38:32 -- common/autotest_common.sh@68 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:15.931 10:38:32 -- common/autotest_common.sh@70 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:15.931 10:38:32 -- common/autotest_common.sh@72 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:15.931 10:38:32 -- common/autotest_common.sh@74 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:15.931 10:38:32 -- common/autotest_common.sh@76 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:15.931 10:38:32 -- common/autotest_common.sh@78 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:15.931 10:38:32 -- common/autotest_common.sh@80 -- # : 1 00:08:15.931 10:38:32 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:15.931 10:38:32 -- common/autotest_common.sh@82 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:15.931 10:38:32 -- common/autotest_common.sh@84 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:15.931 10:38:32 -- common/autotest_common.sh@86 -- # : 1 00:08:15.931 10:38:32 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:15.931 10:38:32 -- common/autotest_common.sh@88 -- # : 1 00:08:15.931 10:38:32 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:15.931 10:38:32 -- common/autotest_common.sh@90 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:15.931 10:38:32 -- common/autotest_common.sh@92 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:15.931 10:38:32 -- common/autotest_common.sh@94 -- # : 0 00:08:15.931 10:38:32 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:15.931 10:38:32 -- common/autotest_common.sh@96 -- # : tcp 00:08:15.932 10:38:32 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:15.932 10:38:32 -- common/autotest_common.sh@98 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:15.932 10:38:32 -- common/autotest_common.sh@100 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:15.932 10:38:32 -- common/autotest_common.sh@102 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:15.932 10:38:32 -- 
common/autotest_common.sh@104 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:15.932 10:38:32 -- common/autotest_common.sh@106 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:15.932 10:38:32 -- common/autotest_common.sh@108 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:15.932 10:38:32 -- common/autotest_common.sh@110 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:15.932 10:38:32 -- common/autotest_common.sh@112 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:15.932 10:38:32 -- common/autotest_common.sh@114 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:15.932 10:38:32 -- common/autotest_common.sh@116 -- # : 1 00:08:15.932 10:38:32 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:15.932 10:38:32 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.932 10:38:32 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:15.932 10:38:32 -- common/autotest_common.sh@120 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:15.932 10:38:32 -- common/autotest_common.sh@122 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:15.932 10:38:32 -- common/autotest_common.sh@124 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:15.932 10:38:32 -- common/autotest_common.sh@126 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:15.932 10:38:32 -- common/autotest_common.sh@128 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:15.932 10:38:32 -- common/autotest_common.sh@130 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:15.932 10:38:32 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:15.932 10:38:32 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:15.932 10:38:32 -- common/autotest_common.sh@134 -- # : true 00:08:15.932 10:38:32 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:15.932 10:38:32 -- common/autotest_common.sh@136 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:15.932 10:38:32 -- common/autotest_common.sh@138 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:15.932 10:38:32 -- common/autotest_common.sh@140 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:15.932 10:38:32 -- common/autotest_common.sh@142 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:15.932 10:38:32 -- common/autotest_common.sh@144 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:15.932 10:38:32 -- common/autotest_common.sh@146 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:15.932 10:38:32 -- common/autotest_common.sh@148 -- # : e810 00:08:15.932 10:38:32 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:15.932 10:38:32 -- common/autotest_common.sh@150 -- # : 0 00:08:15.932 10:38:32 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:15.932 10:38:32 -- common/autotest_common.sh@152 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:15.932 10:38:32 -- common/autotest_common.sh@154 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:15.932 10:38:32 -- common/autotest_common.sh@156 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:15.932 10:38:32 -- common/autotest_common.sh@158 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:15.932 10:38:32 -- common/autotest_common.sh@160 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:15.932 10:38:32 -- common/autotest_common.sh@163 -- # : 00:08:15.932 10:38:32 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:15.932 10:38:32 -- common/autotest_common.sh@165 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:15.932 10:38:32 -- common/autotest_common.sh@167 -- # : 0 00:08:15.932 10:38:32 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:15.932 10:38:32 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.932 10:38:32 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.932 10:38:32 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.932 10:38:32 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.932 10:38:32 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.932 10:38:32 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:15.932 10:38:32 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:15.932 10:38:32 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.932 10:38:32 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.932 10:38:32 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.932 10:38:32 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.932 10:38:32 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:15.932 10:38:32 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:15.932 10:38:32 -- common/autotest_common.sh@196 -- # cat 00:08:15.932 10:38:32 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:15.932 10:38:32 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.932 10:38:32 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.932 10:38:32 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.932 10:38:32 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.932 10:38:32 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:15.932 10:38:32 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:15.932 10:38:32 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.932 10:38:32 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.932 10:38:32 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.932 10:38:32 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.932 10:38:32 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.932 10:38:32 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.932 10:38:32 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.932 10:38:32 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.932 10:38:32 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.932 10:38:32 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.932 10:38:32 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.932 10:38:32 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.932 10:38:32 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:15.932 10:38:32 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:15.932 10:38:32 -- common/autotest_common.sh@249 -- # valgrind= 00:08:15.932 10:38:32 -- common/autotest_common.sh@255 -- # uname -s 00:08:15.932 10:38:32 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:15.932 10:38:32 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:15.932 10:38:32 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:15.932 10:38:32 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:15.932 10:38:32 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:15.932 10:38:32 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:15.932 10:38:32 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:15.932 10:38:32 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:08:15.933 10:38:32 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:15.933 10:38:32 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:15.933 10:38:32 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:15.933 10:38:32 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:15.933 10:38:32 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:15.933 10:38:32 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:15.933 10:38:32 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:15.933 10:38:32 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:08:15.933 10:38:32 -- common/autotest_common.sh@309 -- # [[ -z 3355512 ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@309 -- # kill -0 3355512 00:08:15.933 10:38:32 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:15.933 10:38:32 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:15.933 10:38:32 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:15.933 10:38:32 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:15.933 10:38:32 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:15.933 10:38:32 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:15.933 10:38:32 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:15.933 10:38:32 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.RYxNoa 00:08:15.933 10:38:32 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:15.933 10:38:32 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RYxNoa/tests/target /tmp/spdk.RYxNoa 00:08:15.933 10:38:32 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@318 -- # df -T 00:08:15.933 10:38:32 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=53601591296 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994737664 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=8393146368 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943850496 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997368832 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390187008 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398948352 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996455424 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997368832 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=913408 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199468032 00:08:15.933 10:38:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199472128 00:08:15.933 10:38:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:15.933 10:38:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.933 10:38:32 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:15.933 * Looking for test storage... 
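set_test_storage, whose bookkeeping follows, walks the df -T snapshot captured above, picks the mount that backs the test directory, and only accepts it if the roughly 2 GiB it wants (requested_size=2214592512) fits without pushing the filesystem past 95% usage. A small worked sketch of that check with this run's overlay numbers (the variable names here are illustrative, not the script's):

    requested=2214592512      # ~2 GiB of scratch space for the test
    size=61994737664          # spdk_root overlay: total size
    avail=53601591296         # ...available (the script's target_space)
    used=$(( size - avail ))  # 8393146368

    (( avail >= requested )) || echo "not enough free space"
    new_size=$(( used + requested ))            # 10607738880, matching the log below
    if (( new_size * 100 / size > 95 )); then
        echo "mount would exceed 95% usage, try the next candidate"
    fi
    # here: 10607738880 * 100 / 61994737664 is about 17%, so this mount is accepted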
00:08:15.933 10:38:32 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:15.933 10:38:32 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:15.933 10:38:32 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.933 10:38:32 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:15.933 10:38:32 -- common/autotest_common.sh@363 -- # mount=/ 00:08:15.933 10:38:32 -- common/autotest_common.sh@365 -- # target_space=53601591296 00:08:15.933 10:38:32 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:15.933 10:38:32 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:15.933 10:38:32 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@372 -- # new_size=10607738880 00:08:15.933 10:38:32 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:15.933 10:38:32 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.933 10:38:32 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.933 10:38:32 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.933 10:38:32 -- common/autotest_common.sh@380 -- # return 0 00:08:15.933 10:38:32 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:15.933 10:38:32 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:15.933 10:38:32 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:15.933 10:38:32 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:15.933 10:38:32 -- common/autotest_common.sh@1672 -- # true 00:08:15.933 10:38:32 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:15.933 10:38:32 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:15.933 10:38:32 -- common/autotest_common.sh@27 -- # exec 00:08:15.933 10:38:32 -- common/autotest_common.sh@29 -- # exec 00:08:15.933 10:38:32 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:15.933 10:38:32 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:15.933 10:38:32 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:15.933 10:38:32 -- common/autotest_common.sh@18 -- # set -x 00:08:15.933 10:38:32 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.933 10:38:32 -- nvmf/common.sh@7 -- # uname -s 00:08:15.933 10:38:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.933 10:38:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.933 10:38:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.933 10:38:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.933 10:38:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.933 10:38:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.933 10:38:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.933 10:38:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.933 10:38:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.933 10:38:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.933 10:38:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.933 10:38:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.933 10:38:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.933 10:38:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.933 10:38:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.933 10:38:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.933 10:38:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.933 10:38:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.933 10:38:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.933 10:38:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.933 10:38:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.933 10:38:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.933 10:38:32 -- paths/export.sh@5 -- # export PATH 00:08:15.934 10:38:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.934 10:38:32 -- nvmf/common.sh@46 -- # : 0 00:08:15.934 10:38:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:15.934 10:38:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:15.934 10:38:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:15.934 10:38:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.934 10:38:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.934 10:38:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:15.934 10:38:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:15.934 10:38:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:15.934 10:38:32 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:15.934 10:38:32 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:15.934 10:38:32 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:15.934 10:38:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:15.934 10:38:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.934 10:38:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:15.934 10:38:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:15.934 10:38:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:15.934 10:38:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.934 10:38:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.934 10:38:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.934 10:38:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:15.934 10:38:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:15.934 10:38:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:15.934 10:38:32 -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 10:38:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:18.463 10:38:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:18.463 10:38:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:18.463 10:38:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:18.463 10:38:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:18.463 10:38:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:18.463 10:38:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:18.463 10:38:34 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:18.463 10:38:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:18.463 10:38:34 -- nvmf/common.sh@295 -- # e810=() 00:08:18.463 10:38:34 -- nvmf/common.sh@295 -- # local -ga e810 00:08:18.463 10:38:34 -- nvmf/common.sh@296 -- # x722=() 00:08:18.463 10:38:34 -- nvmf/common.sh@296 -- # local -ga x722 00:08:18.463 10:38:34 -- nvmf/common.sh@297 -- # mlx=() 00:08:18.463 10:38:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:18.463 10:38:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.463 10:38:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:18.463 10:38:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:18.463 10:38:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:18.463 10:38:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:18.463 10:38:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.463 10:38:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:18.463 10:38:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.463 10:38:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:18.463 10:38:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:18.463 10:38:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.463 10:38:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:18.463 10:38:34 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.463 10:38:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.463 10:38:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.463 10:38:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:18.463 10:38:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.463 10:38:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:18.463 10:38:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.463 10:38:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.463 10:38:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.463 10:38:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:18.463 10:38:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:18.463 10:38:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:18.463 10:38:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.463 10:38:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.463 10:38:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.463 10:38:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:18.463 10:38:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.463 10:38:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.463 10:38:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:18.463 10:38:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.463 10:38:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.463 10:38:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:18.463 10:38:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:18.463 10:38:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.463 10:38:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.463 10:38:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.463 10:38:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.463 10:38:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:18.463 10:38:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.463 10:38:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.463 10:38:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.463 10:38:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:18.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:08:18.463 00:08:18.463 --- 10.0.0.2 ping statistics --- 00:08:18.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.463 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:18.463 10:38:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:08:18.463 00:08:18.463 --- 10.0.0.1 ping statistics --- 00:08:18.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.463 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:18.463 10:38:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.463 10:38:34 -- nvmf/common.sh@410 -- # return 0 00:08:18.463 10:38:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:18.463 10:38:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.463 10:38:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:18.463 10:38:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.463 10:38:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:18.463 10:38:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:18.463 10:38:34 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:18.463 10:38:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:18.463 10:38:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.463 10:38:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 ************************************ 00:08:18.463 START TEST nvmf_filesystem_no_in_capsule 00:08:18.463 ************************************ 00:08:18.463 10:38:34 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:18.463 10:38:34 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:18.463 10:38:34 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:18.463 10:38:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:18.463 10:38:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:18.463 10:38:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.463 10:38:34 -- nvmf/common.sh@469 -- # nvmfpid=3357136 00:08:18.463 10:38:34 -- nvmf/common.sh@470 -- # waitforlisten 3357136 00:08:18.463 10:38:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.463 10:38:34 -- common/autotest_common.sh@819 -- # '[' -z 3357136 ']' 00:08:18.463 10:38:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.463 10:38:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:18.463 10:38:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.463 10:38:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:18.464 10:38:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.464 [2024-07-10 10:38:34.897630] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:18.464 [2024-07-10 10:38:34.897718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.464 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.464 [2024-07-10 10:38:34.975689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.464 [2024-07-10 10:38:35.074368] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.464 [2024-07-10 10:38:35.074548] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.464 [2024-07-10 10:38:35.074580] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.464 [2024-07-10 10:38:35.074594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.464 [2024-07-10 10:38:35.074658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.464 [2024-07-10 10:38:35.074699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.464 [2024-07-10 10:38:35.074755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.464 [2024-07-10 10:38:35.074758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.396 10:38:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:19.396 10:38:35 -- common/autotest_common.sh@852 -- # return 0 00:08:19.396 10:38:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:19.397 10:38:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:19.397 10:38:35 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 10:38:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.397 10:38:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:19.397 10:38:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.397 10:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.397 10:38:35 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 [2024-07-10 10:38:35.949152] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.397 10:38:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.397 10:38:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:19.397 10:38:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.397 10:38:35 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 Malloc1 00:08:19.397 10:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.397 10:38:36 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.397 10:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.397 10:38:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 10:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.397 10:38:36 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.397 10:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.397 10:38:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 10:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.397 10:38:36 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
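For reference, the target-side configuration performed by the rpc_cmd calls above reduces to the following sequence. This is a condensed sketch, assuming the commands are issued through SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock and reusing the names from the trace (Malloc1, nqn.2016-06.io.spdk:cnode1, 10.0.0.2:4420):

    # Create the TCP transport with the options shown in the trace (in-capsule data size 0 for this pass)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # Back the namespace with a 512 MiB malloc bdev using 512-byte blocks
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The host side then connects from the default namespace, as traced further below
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The nvmf_tgt itself was launched under ip netns exec cvl_0_0_ns_spdk (see the nvmfappstart line above), so 10.0.0.2:4420 lives in that namespace while the host-side nvme connect runs against it over the paired cvl_0_1 interface.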
00:08:19.397 10:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.397 10:38:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 [2024-07-10 10:38:36.129071] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.397 10:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.397 10:38:36 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:19.397 10:38:36 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:19.397 10:38:36 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:19.397 10:38:36 -- common/autotest_common.sh@1359 -- # local bs 00:08:19.397 10:38:36 -- common/autotest_common.sh@1360 -- # local nb 00:08:19.397 10:38:36 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:19.397 10:38:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.397 10:38:36 -- common/autotest_common.sh@10 -- # set +x 00:08:19.397 10:38:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.397 10:38:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:19.397 { 00:08:19.397 "name": "Malloc1", 00:08:19.397 "aliases": [ 00:08:19.397 "eacd53d5-8035-407e-bdea-9c049c143f6c" 00:08:19.397 ], 00:08:19.397 "product_name": "Malloc disk", 00:08:19.397 "block_size": 512, 00:08:19.397 "num_blocks": 1048576, 00:08:19.397 "uuid": "eacd53d5-8035-407e-bdea-9c049c143f6c", 00:08:19.397 "assigned_rate_limits": { 00:08:19.397 "rw_ios_per_sec": 0, 00:08:19.397 "rw_mbytes_per_sec": 0, 00:08:19.397 "r_mbytes_per_sec": 0, 00:08:19.397 "w_mbytes_per_sec": 0 00:08:19.397 }, 00:08:19.397 "claimed": true, 00:08:19.397 "claim_type": "exclusive_write", 00:08:19.397 "zoned": false, 00:08:19.397 "supported_io_types": { 00:08:19.397 "read": true, 00:08:19.397 "write": true, 00:08:19.397 "unmap": true, 00:08:19.397 "write_zeroes": true, 00:08:19.397 "flush": true, 00:08:19.397 "reset": true, 00:08:19.397 "compare": false, 00:08:19.397 "compare_and_write": false, 00:08:19.397 "abort": true, 00:08:19.397 "nvme_admin": false, 00:08:19.397 "nvme_io": false 00:08:19.397 }, 00:08:19.397 "memory_domains": [ 00:08:19.397 { 00:08:19.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.397 "dma_device_type": 2 00:08:19.397 } 00:08:19.397 ], 00:08:19.397 "driver_specific": {} 00:08:19.397 } 00:08:19.397 ]' 00:08:19.397 10:38:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:19.397 10:38:36 -- common/autotest_common.sh@1362 -- # bs=512 00:08:19.397 10:38:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:19.655 10:38:36 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:19.655 10:38:36 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:19.655 10:38:36 -- common/autotest_common.sh@1367 -- # echo 512 00:08:19.655 10:38:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.655 10:38:36 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.219 10:38:36 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:20.219 10:38:36 -- common/autotest_common.sh@1177 -- # local i=0 00:08:20.220 10:38:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.220 10:38:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:20.220 10:38:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:22.120 10:38:38 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:22.120 10:38:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:22.120 10:38:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.120 10:38:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:22.120 10:38:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.120 10:38:38 -- common/autotest_common.sh@1187 -- # return 0 00:08:22.120 10:38:38 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:22.120 10:38:38 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:22.120 10:38:38 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:22.120 10:38:38 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:22.120 10:38:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:22.120 10:38:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:22.120 10:38:38 -- setup/common.sh@80 -- # echo 536870912 00:08:22.120 10:38:38 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:22.120 10:38:38 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:22.378 10:38:38 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:22.378 10:38:38 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:22.378 10:38:39 -- target/filesystem.sh@69 -- # partprobe 00:08:22.965 10:38:39 -- target/filesystem.sh@70 -- # sleep 1 00:08:24.335 10:38:40 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:24.335 10:38:40 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:24.335 10:38:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.335 10:38:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.335 10:38:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.335 ************************************ 00:08:24.335 START TEST filesystem_ext4 00:08:24.335 ************************************ 00:08:24.335 10:38:40 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:24.335 10:38:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:24.335 10:38:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.335 10:38:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:24.335 10:38:40 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:24.335 10:38:40 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.335 10:38:40 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.335 10:38:40 -- common/autotest_common.sh@905 -- # local force 00:08:24.335 10:38:40 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:24.335 10:38:40 -- common/autotest_common.sh@908 -- # force=-F 00:08:24.335 10:38:40 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:24.335 mke2fs 1.46.5 (30-Dec-2021) 00:08:24.336 Discarding device blocks: 0/522240 done 00:08:24.336 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:24.336 Filesystem UUID: 5ccefc8a-f3a0-4102-9e11-a43c6ae4aa5a 00:08:24.336 Superblock backups stored on blocks: 00:08:24.336 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:24.336 00:08:24.336 Allocating group tables: 0/64 done 00:08:24.336 Writing inode tables: 0/64 done 00:08:24.336 Creating journal (8192 blocks): done 00:08:25.266 Writing superblocks and filesystem accounting information: 0/64 done 00:08:25.266 00:08:25.266 10:38:42 -- 
common/autotest_common.sh@921 -- # return 0 00:08:25.266 10:38:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.197 10:38:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:26.197 10:38:42 -- target/filesystem.sh@25 -- # sync 00:08:26.197 10:38:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:26.197 10:38:42 -- target/filesystem.sh@27 -- # sync 00:08:26.197 10:38:42 -- target/filesystem.sh@29 -- # i=0 00:08:26.197 10:38:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:26.197 10:38:42 -- target/filesystem.sh@37 -- # kill -0 3357136 00:08:26.197 10:38:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.197 10:38:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.197 10:38:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.197 10:38:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.197 00:08:26.197 real 0m2.230s 00:08:26.197 user 0m0.017s 00:08:26.197 sys 0m0.055s 00:08:26.197 10:38:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.197 10:38:42 -- common/autotest_common.sh@10 -- # set +x 00:08:26.197 ************************************ 00:08:26.197 END TEST filesystem_ext4 00:08:26.197 ************************************ 00:08:26.197 10:38:43 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:26.197 10:38:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:26.197 10:38:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.197 10:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:26.197 ************************************ 00:08:26.197 START TEST filesystem_btrfs 00:08:26.197 ************************************ 00:08:26.197 10:38:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:26.197 10:38:43 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:26.197 10:38:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.197 10:38:43 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:26.197 10:38:43 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:26.198 10:38:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:26.198 10:38:43 -- common/autotest_common.sh@904 -- # local i=0 00:08:26.198 10:38:43 -- common/autotest_common.sh@905 -- # local force 00:08:26.198 10:38:43 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:26.198 10:38:43 -- common/autotest_common.sh@910 -- # force=-f 00:08:26.198 10:38:43 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:26.761 btrfs-progs v6.6.2 00:08:26.761 See https://btrfs.readthedocs.io for more information. 00:08:26.761 00:08:26.761 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:26.761 NOTE: several default settings have changed in version 5.15, please make sure 00:08:26.761 this does not affect your deployments: 00:08:26.761 - DUP for metadata (-m dup) 00:08:26.761 - enabled no-holes (-O no-holes) 00:08:26.761 - enabled free-space-tree (-R free-space-tree) 00:08:26.761 00:08:26.761 Label: (null) 00:08:26.761 UUID: e5c13b9a-94c1-4ab7-85d2-6f8215e953aa 00:08:26.761 Node size: 16384 00:08:26.761 Sector size: 4096 00:08:26.761 Filesystem size: 510.00MiB 00:08:26.761 Block group profiles: 00:08:26.761 Data: single 8.00MiB 00:08:26.761 Metadata: DUP 32.00MiB 00:08:26.761 System: DUP 8.00MiB 00:08:26.761 SSD detected: yes 00:08:26.761 Zoned device: no 00:08:26.761 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:26.761 Runtime features: free-space-tree 00:08:26.761 Checksum: crc32c 00:08:26.761 Number of devices: 1 00:08:26.761 Devices: 00:08:26.761 ID SIZE PATH 00:08:26.761 1 510.00MiB /dev/nvme0n1p1 00:08:26.761 00:08:26.761 10:38:43 -- common/autotest_common.sh@921 -- # return 0 00:08:26.761 10:38:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:27.019 10:38:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:27.019 10:38:43 -- target/filesystem.sh@25 -- # sync 00:08:27.019 10:38:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:27.019 10:38:43 -- target/filesystem.sh@27 -- # sync 00:08:27.019 10:38:43 -- target/filesystem.sh@29 -- # i=0 00:08:27.019 10:38:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:27.019 10:38:43 -- target/filesystem.sh@37 -- # kill -0 3357136 00:08:27.019 10:38:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:27.019 10:38:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:27.019 10:38:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:27.019 10:38:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:27.019 00:08:27.019 real 0m0.659s 00:08:27.019 user 0m0.015s 00:08:27.019 sys 0m0.115s 00:08:27.019 10:38:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.019 10:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.019 ************************************ 00:08:27.019 END TEST filesystem_btrfs 00:08:27.019 ************************************ 00:08:27.019 10:38:43 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:27.019 10:38:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:27.019 10:38:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.019 10:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:27.019 ************************************ 00:08:27.019 START TEST filesystem_xfs 00:08:27.019 ************************************ 00:08:27.019 10:38:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:27.019 10:38:43 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:27.019 10:38:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.019 10:38:43 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:27.019 10:38:43 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:27.019 10:38:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:27.019 10:38:43 -- common/autotest_common.sh@904 -- # local i=0 00:08:27.019 10:38:43 -- common/autotest_common.sh@905 -- # local force 00:08:27.019 10:38:43 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:27.019 10:38:43 -- common/autotest_common.sh@910 -- # force=-f 00:08:27.019 10:38:43 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:27.019 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:27.019 = sectsz=512 attr=2, projid32bit=1 00:08:27.019 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:27.020 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:27.020 data = bsize=4096 blocks=130560, imaxpct=25 00:08:27.020 = sunit=0 swidth=0 blks 00:08:27.020 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:27.020 log =internal log bsize=4096 blocks=16384, version=2 00:08:27.020 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:27.020 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:28.003 Discarding blocks...Done. 00:08:28.003 10:38:44 -- common/autotest_common.sh@921 -- # return 0 00:08:28.003 10:38:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.529 10:38:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.529 10:38:47 -- target/filesystem.sh@25 -- # sync 00:08:30.529 10:38:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.529 10:38:47 -- target/filesystem.sh@27 -- # sync 00:08:30.529 10:38:47 -- target/filesystem.sh@29 -- # i=0 00:08:30.529 10:38:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.529 10:38:47 -- target/filesystem.sh@37 -- # kill -0 3357136 00:08:30.529 10:38:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.529 10:38:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.529 10:38:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.529 10:38:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.529 00:08:30.529 real 0m3.433s 00:08:30.529 user 0m0.022s 00:08:30.529 sys 0m0.052s 00:08:30.529 10:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.529 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:08:30.529 ************************************ 00:08:30.529 END TEST filesystem_xfs 00:08:30.529 ************************************ 00:08:30.529 10:38:47 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:30.787 10:38:47 -- target/filesystem.sh@93 -- # sync 00:08:30.787 10:38:47 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.787 10:38:47 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.787 10:38:47 -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.787 10:38:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:30.787 10:38:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.787 10:38:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:30.787 10:38:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.787 10:38:47 -- common/autotest_common.sh@1210 -- # return 0 00:08:30.787 10:38:47 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.787 10:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.787 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:08:30.787 10:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.787 10:38:47 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:30.787 10:38:47 -- target/filesystem.sh@101 -- # killprocess 3357136 00:08:30.787 10:38:47 -- common/autotest_common.sh@926 -- # '[' -z 3357136 ']' 00:08:30.787 10:38:47 -- common/autotest_common.sh@930 -- # kill -0 3357136 00:08:30.787 10:38:47 -- 
common/autotest_common.sh@931 -- # uname 00:08:30.787 10:38:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:30.787 10:38:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3357136 00:08:30.787 10:38:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:30.787 10:38:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:30.787 10:38:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3357136' 00:08:30.787 killing process with pid 3357136 00:08:30.787 10:38:47 -- common/autotest_common.sh@945 -- # kill 3357136 00:08:30.787 10:38:47 -- common/autotest_common.sh@950 -- # wait 3357136 00:08:31.353 10:38:47 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:31.353 00:08:31.353 real 0m13.123s 00:08:31.353 user 0m50.626s 00:08:31.353 sys 0m1.886s 00:08:31.353 10:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.353 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 ************************************ 00:08:31.353 END TEST nvmf_filesystem_no_in_capsule 00:08:31.353 ************************************ 00:08:31.353 10:38:47 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:31.353 10:38:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:31.353 10:38:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.353 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 ************************************ 00:08:31.353 START TEST nvmf_filesystem_in_capsule 00:08:31.353 ************************************ 00:08:31.353 10:38:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:31.353 10:38:48 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:31.353 10:38:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:31.353 10:38:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:31.353 10:38:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:31.353 10:38:48 -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 10:38:48 -- nvmf/common.sh@469 -- # nvmfpid=3358886 00:08:31.353 10:38:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.353 10:38:48 -- nvmf/common.sh@470 -- # waitforlisten 3358886 00:08:31.353 10:38:48 -- common/autotest_common.sh@819 -- # '[' -z 3358886 ']' 00:08:31.353 10:38:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.353 10:38:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.353 10:38:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.353 10:38:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.353 10:38:48 -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 [2024-07-10 10:38:48.052195] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:31.353 [2024-07-10 10:38:48.052302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.353 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.353 [2024-07-10 10:38:48.119910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.612 [2024-07-10 10:38:48.212920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.612 [2024-07-10 10:38:48.213094] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.612 [2024-07-10 10:38:48.213114] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.612 [2024-07-10 10:38:48.213129] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.612 [2024-07-10 10:38:48.213190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.612 [2024-07-10 10:38:48.213254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.612 [2024-07-10 10:38:48.213348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.612 [2024-07-10 10:38:48.213351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.545 10:38:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.545 10:38:49 -- common/autotest_common.sh@852 -- # return 0 00:08:32.545 10:38:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:32.545 10:38:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 10:38:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.545 10:38:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:32.545 10:38:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:32.545 10:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 [2024-07-10 10:38:49.033057] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.545 10:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.545 10:38:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:32.545 10:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 Malloc1 00:08:32.545 10:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.545 10:38:49 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.545 10:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 10:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.545 10:38:49 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:32.545 10:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 10:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.545 10:38:49 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
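This second pass (nvmf_filesystem_in_capsule) repeats the same target setup and smoke tests; the only target-side difference is the in-capsule data size handed to the transport, visible in the rpc_cmd line above. As a hedged sketch of the two variants, again assuming the scripts/rpc.py form of the trace's rpc_cmd wrapper:

    # First pass (filesystem_no_in_capsule): no in-capsule data allowed
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # Second pass (filesystem_in_capsule): up to 4096 bytes of write data may travel inside the command capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096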
00:08:32.545 10:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 [2024-07-10 10:38:49.221191] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.545 10:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.545 10:38:49 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:32.545 10:38:49 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:32.545 10:38:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:32.545 10:38:49 -- common/autotest_common.sh@1359 -- # local bs 00:08:32.545 10:38:49 -- common/autotest_common.sh@1360 -- # local nb 00:08:32.545 10:38:49 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:32.545 10:38:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.545 10:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.545 10:38:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.545 10:38:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:32.545 { 00:08:32.545 "name": "Malloc1", 00:08:32.545 "aliases": [ 00:08:32.545 "755618ff-1323-4809-bb6d-238eab345bde" 00:08:32.545 ], 00:08:32.545 "product_name": "Malloc disk", 00:08:32.545 "block_size": 512, 00:08:32.545 "num_blocks": 1048576, 00:08:32.545 "uuid": "755618ff-1323-4809-bb6d-238eab345bde", 00:08:32.545 "assigned_rate_limits": { 00:08:32.545 "rw_ios_per_sec": 0, 00:08:32.545 "rw_mbytes_per_sec": 0, 00:08:32.545 "r_mbytes_per_sec": 0, 00:08:32.545 "w_mbytes_per_sec": 0 00:08:32.545 }, 00:08:32.545 "claimed": true, 00:08:32.545 "claim_type": "exclusive_write", 00:08:32.545 "zoned": false, 00:08:32.545 "supported_io_types": { 00:08:32.545 "read": true, 00:08:32.545 "write": true, 00:08:32.545 "unmap": true, 00:08:32.545 "write_zeroes": true, 00:08:32.546 "flush": true, 00:08:32.546 "reset": true, 00:08:32.546 "compare": false, 00:08:32.546 "compare_and_write": false, 00:08:32.546 "abort": true, 00:08:32.546 "nvme_admin": false, 00:08:32.546 "nvme_io": false 00:08:32.546 }, 00:08:32.546 "memory_domains": [ 00:08:32.546 { 00:08:32.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.546 "dma_device_type": 2 00:08:32.546 } 00:08:32.546 ], 00:08:32.546 "driver_specific": {} 00:08:32.546 } 00:08:32.546 ]' 00:08:32.546 10:38:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:32.546 10:38:49 -- common/autotest_common.sh@1362 -- # bs=512 00:08:32.546 10:38:49 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:32.546 10:38:49 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:32.546 10:38:49 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:32.546 10:38:49 -- common/autotest_common.sh@1367 -- # echo 512 00:08:32.546 10:38:49 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:32.546 10:38:49 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:33.111 10:38:49 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:33.111 10:38:49 -- common/autotest_common.sh@1177 -- # local i=0 00:08:33.111 10:38:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:33.111 10:38:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:33.111 10:38:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:35.637 10:38:51 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:35.637 10:38:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:35.637 10:38:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.637 10:38:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:35.637 10:38:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.637 10:38:51 -- common/autotest_common.sh@1187 -- # return 0 00:08:35.637 10:38:51 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:35.637 10:38:51 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:35.637 10:38:51 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:35.637 10:38:51 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:35.637 10:38:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:35.637 10:38:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:35.637 10:38:51 -- setup/common.sh@80 -- # echo 536870912 00:08:35.637 10:38:51 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:35.637 10:38:51 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:35.637 10:38:51 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:35.637 10:38:51 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:35.637 10:38:52 -- target/filesystem.sh@69 -- # partprobe 00:08:35.894 10:38:52 -- target/filesystem.sh@70 -- # sleep 1 00:08:37.266 10:38:53 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:37.266 10:38:53 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:37.266 10:38:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:37.266 10:38:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.266 10:38:53 -- common/autotest_common.sh@10 -- # set +x 00:08:37.266 ************************************ 00:08:37.266 START TEST filesystem_in_capsule_ext4 00:08:37.266 ************************************ 00:08:37.266 10:38:53 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:37.266 10:38:53 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:37.266 10:38:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.266 10:38:53 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:37.266 10:38:53 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:37.266 10:38:53 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:37.266 10:38:53 -- common/autotest_common.sh@904 -- # local i=0 00:08:37.266 10:38:53 -- common/autotest_common.sh@905 -- # local force 00:08:37.266 10:38:53 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:37.266 10:38:53 -- common/autotest_common.sh@908 -- # force=-F 00:08:37.266 10:38:53 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:37.266 mke2fs 1.46.5 (30-Dec-2021) 00:08:37.266 Discarding device blocks: 0/522240 done 00:08:37.266 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:37.266 Filesystem UUID: 280c283f-b035-45fe-ba5d-9c2dbf706054 00:08:37.266 Superblock backups stored on blocks: 00:08:37.266 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:37.266 00:08:37.266 Allocating group tables: 0/64 done 00:08:37.266 Writing inode tables: 0/64 done 00:08:37.266 Creating journal (8192 blocks): done 00:08:37.266 Writing superblocks and filesystem accounting information: 0/64 done 00:08:37.266 00:08:37.266 
10:38:53 -- common/autotest_common.sh@921 -- # return 0 00:08:37.266 10:38:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.524 10:38:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.525 10:38:54 -- target/filesystem.sh@25 -- # sync 00:08:37.525 10:38:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.525 10:38:54 -- target/filesystem.sh@27 -- # sync 00:08:37.525 10:38:54 -- target/filesystem.sh@29 -- # i=0 00:08:37.525 10:38:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.525 10:38:54 -- target/filesystem.sh@37 -- # kill -0 3358886 00:08:37.525 10:38:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.525 10:38:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.525 10:38:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.525 10:38:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.525 00:08:37.525 real 0m0.524s 00:08:37.525 user 0m0.022s 00:08:37.525 sys 0m0.054s 00:08:37.525 10:38:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.525 10:38:54 -- common/autotest_common.sh@10 -- # set +x 00:08:37.525 ************************************ 00:08:37.525 END TEST filesystem_in_capsule_ext4 00:08:37.525 ************************************ 00:08:37.525 10:38:54 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:37.525 10:38:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:37.525 10:38:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.525 10:38:54 -- common/autotest_common.sh@10 -- # set +x 00:08:37.525 ************************************ 00:08:37.525 START TEST filesystem_in_capsule_btrfs 00:08:37.525 ************************************ 00:08:37.525 10:38:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:37.525 10:38:54 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:37.525 10:38:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.525 10:38:54 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:37.525 10:38:54 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:37.525 10:38:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:37.525 10:38:54 -- common/autotest_common.sh@904 -- # local i=0 00:08:37.525 10:38:54 -- common/autotest_common.sh@905 -- # local force 00:08:37.525 10:38:54 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:37.525 10:38:54 -- common/autotest_common.sh@910 -- # force=-f 00:08:37.525 10:38:54 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:37.783 btrfs-progs v6.6.2 00:08:37.783 See https://btrfs.readthedocs.io for more information. 00:08:37.783 00:08:37.783 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:37.783 NOTE: several default settings have changed in version 5.15, please make sure 00:08:37.783 this does not affect your deployments: 00:08:37.783 - DUP for metadata (-m dup) 00:08:37.783 - enabled no-holes (-O no-holes) 00:08:37.783 - enabled free-space-tree (-R free-space-tree) 00:08:37.783 00:08:37.783 Label: (null) 00:08:37.783 UUID: f21cd281-ab88-4db7-af8a-6b0578f3b54c 00:08:37.783 Node size: 16384 00:08:37.783 Sector size: 4096 00:08:37.783 Filesystem size: 510.00MiB 00:08:37.783 Block group profiles: 00:08:37.783 Data: single 8.00MiB 00:08:37.783 Metadata: DUP 32.00MiB 00:08:37.783 System: DUP 8.00MiB 00:08:37.783 SSD detected: yes 00:08:37.783 Zoned device: no 00:08:37.783 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:37.783 Runtime features: free-space-tree 00:08:37.783 Checksum: crc32c 00:08:37.783 Number of devices: 1 00:08:37.783 Devices: 00:08:37.783 ID SIZE PATH 00:08:37.783 1 510.00MiB /dev/nvme0n1p1 00:08:37.783 00:08:37.783 10:38:54 -- common/autotest_common.sh@921 -- # return 0 00:08:37.783 10:38:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.716 10:38:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.716 10:38:55 -- target/filesystem.sh@25 -- # sync 00:08:38.716 10:38:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.716 10:38:55 -- target/filesystem.sh@27 -- # sync 00:08:38.716 10:38:55 -- target/filesystem.sh@29 -- # i=0 00:08:38.716 10:38:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.716 10:38:55 -- target/filesystem.sh@37 -- # kill -0 3358886 00:08:38.716 10:38:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.716 10:38:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.716 10:38:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.716 10:38:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.716 00:08:38.716 real 0m1.106s 00:08:38.716 user 0m0.018s 00:08:38.716 sys 0m0.117s 00:08:38.716 10:38:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.716 10:38:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.716 ************************************ 00:08:38.716 END TEST filesystem_in_capsule_btrfs 00:08:38.716 ************************************ 00:08:38.716 10:38:55 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:38.716 10:38:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:38.716 10:38:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.716 10:38:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.716 ************************************ 00:08:38.716 START TEST filesystem_in_capsule_xfs 00:08:38.716 ************************************ 00:08:38.716 10:38:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:38.716 10:38:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:38.716 10:38:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.716 10:38:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:38.716 10:38:55 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:38.716 10:38:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:38.716 10:38:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:38.716 10:38:55 -- common/autotest_common.sh@905 -- # local force 00:08:38.716 10:38:55 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:38.716 10:38:55 -- common/autotest_common.sh@910 -- # force=-f 
00:08:38.716 10:38:55 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:38.716 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:38.716 = sectsz=512 attr=2, projid32bit=1 00:08:38.716 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:38.716 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:38.716 data = bsize=4096 blocks=130560, imaxpct=25 00:08:38.716 = sunit=0 swidth=0 blks 00:08:38.716 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:38.716 log =internal log bsize=4096 blocks=16384, version=2 00:08:38.716 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:38.716 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:39.648 Discarding blocks...Done. 00:08:39.648 10:38:56 -- common/autotest_common.sh@921 -- # return 0 00:08:39.648 10:38:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.174 10:38:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.174 10:38:58 -- target/filesystem.sh@25 -- # sync 00:08:42.174 10:38:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.174 10:38:58 -- target/filesystem.sh@27 -- # sync 00:08:42.174 10:38:58 -- target/filesystem.sh@29 -- # i=0 00:08:42.174 10:38:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.174 10:38:58 -- target/filesystem.sh@37 -- # kill -0 3358886 00:08:42.174 10:38:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.174 10:38:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.174 10:38:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.174 10:38:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.174 00:08:42.174 real 0m3.049s 00:08:42.174 user 0m0.018s 00:08:42.174 sys 0m0.059s 00:08:42.174 10:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.174 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:08:42.174 ************************************ 00:08:42.174 END TEST filesystem_in_capsule_xfs 00:08:42.174 ************************************ 00:08:42.174 10:38:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:42.174 10:38:58 -- target/filesystem.sh@93 -- # sync 00:08:42.174 10:38:58 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.174 10:38:58 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.174 10:38:58 -- common/autotest_common.sh@1198 -- # local i=0 00:08:42.174 10:38:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:42.174 10:38:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.174 10:38:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:42.174 10:38:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.174 10:38:58 -- common/autotest_common.sh@1210 -- # return 0 00:08:42.174 10:38:58 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.174 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:42.174 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:08:42.174 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:42.174 10:38:58 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:42.174 10:38:58 -- target/filesystem.sh@101 -- # killprocess 3358886 00:08:42.174 10:38:58 -- common/autotest_common.sh@926 -- # '[' -z 3358886 ']' 00:08:42.174 10:38:58 -- common/autotest_common.sh@930 -- # kill -0 3358886 
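Each of the filesystem_* subtests above runs the same mount/I-O smoke test against the exported namespace, and each pass ends with the teardown being traced here. A condensed sketch using the trace's names (/dev/nvme0n1p1, /mnt/device, nqn.2016-06.io.spdk:cnode1, and the nvmf_tgt pid recorded as nvmfpid); the scripts/rpc.py form of the delete call is an assumption, since the trace goes through its rpc_cmd wrapper:

    # Per-filesystem smoke test (make_filesystem uses mkfs.ext4 -F, mkfs.btrfs -f, or mkfs.xfs -f)
    mkfs.xfs -f /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1      # the namespace must still be visible afterwards
    # End-of-pass teardown
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"         # killprocess: stop the target started by nvmfappstart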
00:08:42.174 10:38:58 -- common/autotest_common.sh@931 -- # uname 00:08:42.174 10:38:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:42.174 10:38:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3358886 00:08:42.174 10:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:42.174 10:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:42.174 10:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3358886' 00:08:42.174 killing process with pid 3358886 00:08:42.174 10:38:58 -- common/autotest_common.sh@945 -- # kill 3358886 00:08:42.174 10:38:58 -- common/autotest_common.sh@950 -- # wait 3358886 00:08:42.433 10:38:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:42.433 00:08:42.433 real 0m11.127s 00:08:42.433 user 0m42.810s 00:08:42.433 sys 0m1.733s 00:08:42.433 10:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.433 10:38:59 -- common/autotest_common.sh@10 -- # set +x 00:08:42.433 ************************************ 00:08:42.433 END TEST nvmf_filesystem_in_capsule 00:08:42.433 ************************************ 00:08:42.433 10:38:59 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:42.433 10:38:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.433 10:38:59 -- nvmf/common.sh@116 -- # sync 00:08:42.433 10:38:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.433 10:38:59 -- nvmf/common.sh@119 -- # set +e 00:08:42.433 10:38:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.433 10:38:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.433 rmmod nvme_tcp 00:08:42.433 rmmod nvme_fabrics 00:08:42.433 rmmod nvme_keyring 00:08:42.433 10:38:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:42.433 10:38:59 -- nvmf/common.sh@123 -- # set -e 00:08:42.433 10:38:59 -- nvmf/common.sh@124 -- # return 0 00:08:42.433 10:38:59 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:42.433 10:38:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.433 10:38:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.433 10:38:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.433 10:38:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.433 10:38:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.433 10:38:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.433 10:38:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.433 10:38:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.967 10:39:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:44.967 00:08:44.967 real 0m28.728s 00:08:44.967 user 1m34.351s 00:08:44.967 sys 0m5.192s 00:08:44.967 10:39:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.967 10:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.967 ************************************ 00:08:44.967 END TEST nvmf_filesystem 00:08:44.967 ************************************ 00:08:44.967 10:39:01 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:44.967 10:39:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:44.967 10:39:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.967 10:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.967 ************************************ 00:08:44.967 START TEST nvmf_discovery 00:08:44.967 ************************************ 00:08:44.967 
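The teardown interleaved through the trace above follows a fixed order: disconnect the initiator, delete the subsystem over RPC, stop the target process, then unload the kernel fabric modules. A hedged sketch of that order, assuming SPDK's scripts/rpc.py stands in for the test's rpc_cmd helper and that $nvmfpid holds the target pid launched by this shell:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1                      # drop the host connection first
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the subsystem from the target
kill "$nvmfpid" && wait "$nvmfpid"                                  # stop nvmf_tgt (wait works because it is our child)
sync
modprobe -v -r nvme-tcp                                             # unload transport modules last
modprobe -v -r nvme-fabrics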
10:39:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:44.967 * Looking for test storage... 00:08:44.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.967 10:39:01 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.967 10:39:01 -- nvmf/common.sh@7 -- # uname -s 00:08:44.967 10:39:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.967 10:39:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.967 10:39:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.967 10:39:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.967 10:39:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.967 10:39:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.967 10:39:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.967 10:39:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.967 10:39:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.967 10:39:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.967 10:39:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.967 10:39:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.967 10:39:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.967 10:39:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.967 10:39:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.967 10:39:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.967 10:39:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.967 10:39:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.967 10:39:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.967 10:39:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.968 10:39:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.968 10:39:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.968 10:39:01 -- paths/export.sh@5 -- # export PATH 00:08:44.968 10:39:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.968 10:39:01 -- nvmf/common.sh@46 -- # : 0 00:08:44.968 10:39:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.968 10:39:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.968 10:39:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.968 10:39:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.968 10:39:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.968 10:39:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:44.968 10:39:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.968 10:39:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.968 10:39:01 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:44.968 10:39:01 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:44.968 10:39:01 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:44.968 10:39:01 -- target/discovery.sh@15 -- # hash nvme 00:08:44.968 10:39:01 -- target/discovery.sh@20 -- # nvmftestinit 00:08:44.968 10:39:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:44.968 10:39:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.968 10:39:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.968 10:39:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.968 10:39:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.968 10:39:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.968 10:39:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.968 10:39:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.968 10:39:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:44.968 10:39:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:44.968 10:39:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:44.968 10:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:46.867 10:39:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.867 10:39:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.867 10:39:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.867 10:39:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.867 10:39:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.867 10:39:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.867 10:39:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.867 10:39:03 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:46.867 10:39:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:46.867 10:39:03 -- nvmf/common.sh@295 -- # e810=() 00:08:46.867 10:39:03 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.867 10:39:03 -- nvmf/common.sh@296 -- # x722=() 00:08:46.867 10:39:03 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.867 10:39:03 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.867 10:39:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.867 10:39:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.867 10:39:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.867 10:39:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:46.867 10:39:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.867 10:39:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.867 10:39:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:46.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:46.867 10:39:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.867 10:39:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:46.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:46.867 10:39:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.867 10:39:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.867 10:39:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.867 10:39:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.867 10:39:03 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.867 10:39:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:46.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:46.867 10:39:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.867 10:39:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.867 10:39:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.867 10:39:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.867 10:39:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.867 10:39:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:46.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:46.867 10:39:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.867 10:39:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.867 10:39:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.867 10:39:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:46.867 10:39:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.867 10:39:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.867 10:39:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.867 10:39:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:46.867 10:39:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.867 10:39:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.867 10:39:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:46.867 10:39:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.867 10:39:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.867 10:39:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.867 10:39:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.867 10:39:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.867 10:39:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.867 10:39:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.867 10:39:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.867 10:39:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.867 10:39:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.867 10:39:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.867 10:39:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.867 10:39:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:08:46.867 00:08:46.867 --- 10.0.0.2 ping statistics --- 00:08:46.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.867 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:08:46.867 10:39:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:08:46.867 00:08:46.867 --- 10.0.0.1 ping statistics --- 00:08:46.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.867 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:46.867 10:39:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.867 10:39:03 -- nvmf/common.sh@410 -- # return 0 00:08:46.867 10:39:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.867 10:39:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.867 10:39:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.867 10:39:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.867 10:39:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.867 10:39:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.867 10:39:03 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:46.867 10:39:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.867 10:39:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.867 10:39:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.867 10:39:03 -- nvmf/common.sh@469 -- # nvmfpid=3362514 00:08:46.867 10:39:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.867 10:39:03 -- nvmf/common.sh@470 -- # waitforlisten 3362514 00:08:46.867 10:39:03 -- common/autotest_common.sh@819 -- # '[' -z 3362514 ']' 00:08:46.867 10:39:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.867 10:39:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.867 10:39:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.867 10:39:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.867 10:39:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.867 [2024-07-10 10:39:03.560572] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:46.867 [2024-07-10 10:39:03.560649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.867 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.867 [2024-07-10 10:39:03.625820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.125 [2024-07-10 10:39:03.714018] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:47.125 [2024-07-10 10:39:03.714153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.125 [2024-07-10 10:39:03.714169] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.125 [2024-07-10 10:39:03.714181] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
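Before the target is started, the trace sets up a private network namespace for the target-side NIC port and verifies TCP reachability in both directions. A condensed sketch of that setup, to be run as root; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are the ones the trace detected and assigned, and will differ on other hosts:

ip netns add cvl_0_0_ns_spdk                                         # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator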
00:08:47.125 [2024-07-10 10:39:03.714246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.125 [2024-07-10 10:39:03.714306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.125 [2024-07-10 10:39:03.714372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.125 [2024-07-10 10:39:03.714375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.059 10:39:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:48.059 10:39:04 -- common/autotest_common.sh@852 -- # return 0 00:08:48.059 10:39:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:48.059 10:39:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.059 10:39:04 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 [2024-07-10 10:39:04.550108] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@26 -- # seq 1 4 00:08:48.059 10:39:04 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.059 10:39:04 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 Null1 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 [2024-07-10 10:39:04.590361] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.059 10:39:04 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 Null2 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:48.059 10:39:04 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.059 10:39:04 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 Null3 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.059 10:39:04 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 Null4 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:48.059 
10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:48.059 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.059 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.059 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.059 10:39:04 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:48.059 00:08:48.059 Discovery Log Number of Records 6, Generation counter 6 00:08:48.059 =====Discovery Log Entry 0====== 00:08:48.059 trtype: tcp 00:08:48.060 adrfam: ipv4 00:08:48.060 subtype: current discovery subsystem 00:08:48.060 treq: not required 00:08:48.060 portid: 0 00:08:48.060 trsvcid: 4420 00:08:48.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:48.060 traddr: 10.0.0.2 00:08:48.060 eflags: explicit discovery connections, duplicate discovery information 00:08:48.060 sectype: none 00:08:48.060 =====Discovery Log Entry 1====== 00:08:48.060 trtype: tcp 00:08:48.060 adrfam: ipv4 00:08:48.060 subtype: nvme subsystem 00:08:48.060 treq: not required 00:08:48.060 portid: 0 00:08:48.060 trsvcid: 4420 00:08:48.060 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:48.060 traddr: 10.0.0.2 00:08:48.060 eflags: none 00:08:48.060 sectype: none 00:08:48.060 =====Discovery Log Entry 2====== 00:08:48.060 trtype: tcp 00:08:48.060 adrfam: ipv4 00:08:48.060 subtype: nvme subsystem 00:08:48.060 treq: not required 00:08:48.060 portid: 0 00:08:48.060 trsvcid: 4420 00:08:48.060 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:48.060 traddr: 10.0.0.2 00:08:48.060 eflags: none 00:08:48.060 sectype: none 00:08:48.060 =====Discovery Log Entry 3====== 00:08:48.060 trtype: tcp 00:08:48.060 adrfam: ipv4 00:08:48.060 subtype: nvme subsystem 00:08:48.060 treq: not required 00:08:48.060 portid: 0 00:08:48.060 trsvcid: 4420 00:08:48.060 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:48.060 traddr: 10.0.0.2 00:08:48.060 eflags: none 00:08:48.060 sectype: none 00:08:48.060 =====Discovery Log Entry 4====== 00:08:48.060 trtype: tcp 00:08:48.060 adrfam: ipv4 00:08:48.060 subtype: nvme subsystem 00:08:48.060 treq: not required 00:08:48.060 portid: 0 00:08:48.060 trsvcid: 4420 00:08:48.060 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:48.060 traddr: 10.0.0.2 00:08:48.060 eflags: none 00:08:48.060 sectype: none 00:08:48.060 =====Discovery Log Entry 5====== 00:08:48.060 trtype: tcp 00:08:48.060 adrfam: ipv4 00:08:48.060 subtype: discovery subsystem referral 00:08:48.060 treq: not required 00:08:48.060 portid: 0 00:08:48.060 trsvcid: 4430 00:08:48.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:48.060 traddr: 10.0.0.2 00:08:48.060 eflags: none 00:08:48.060 sectype: none 00:08:48.060 10:39:04 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:48.060 Perform nvmf subsystem discovery via RPC 00:08:48.060 10:39:04 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 [2024-07-10 10:39:04.794924] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:48.060 [ 00:08:48.060 { 00:08:48.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:48.060 "subtype": "Discovery", 00:08:48.060 "listen_addresses": [ 00:08:48.060 { 00:08:48.060 "transport": "TCP", 00:08:48.060 "trtype": "TCP", 00:08:48.060 "adrfam": "IPv4", 00:08:48.060 "traddr": "10.0.0.2", 00:08:48.060 "trsvcid": "4420" 00:08:48.060 } 00:08:48.060 ], 00:08:48.060 "allow_any_host": true, 00:08:48.060 "hosts": [] 00:08:48.060 }, 00:08:48.060 { 00:08:48.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.060 "subtype": "NVMe", 00:08:48.060 "listen_addresses": [ 00:08:48.060 { 00:08:48.060 "transport": "TCP", 00:08:48.060 "trtype": "TCP", 00:08:48.060 "adrfam": "IPv4", 00:08:48.060 "traddr": "10.0.0.2", 00:08:48.060 "trsvcid": "4420" 00:08:48.060 } 00:08:48.060 ], 00:08:48.060 "allow_any_host": true, 00:08:48.060 "hosts": [], 00:08:48.060 "serial_number": "SPDK00000000000001", 00:08:48.060 "model_number": "SPDK bdev Controller", 00:08:48.060 "max_namespaces": 32, 00:08:48.060 "min_cntlid": 1, 00:08:48.060 "max_cntlid": 65519, 00:08:48.060 "namespaces": [ 00:08:48.060 { 00:08:48.060 "nsid": 1, 00:08:48.060 "bdev_name": "Null1", 00:08:48.060 "name": "Null1", 00:08:48.060 "nguid": "C1493B3DB8A44E73B7FE074F51D769B3", 00:08:48.060 "uuid": "c1493b3d-b8a4-4e73-b7fe-074f51d769b3" 00:08:48.060 } 00:08:48.060 ] 00:08:48.060 }, 00:08:48.060 { 00:08:48.060 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:48.060 "subtype": "NVMe", 00:08:48.060 "listen_addresses": [ 00:08:48.060 { 00:08:48.060 "transport": "TCP", 00:08:48.060 "trtype": "TCP", 00:08:48.060 "adrfam": "IPv4", 00:08:48.060 "traddr": "10.0.0.2", 00:08:48.060 "trsvcid": "4420" 00:08:48.060 } 00:08:48.060 ], 00:08:48.060 "allow_any_host": true, 00:08:48.060 "hosts": [], 00:08:48.060 "serial_number": "SPDK00000000000002", 00:08:48.060 "model_number": "SPDK bdev Controller", 00:08:48.060 "max_namespaces": 32, 00:08:48.060 "min_cntlid": 1, 00:08:48.060 "max_cntlid": 65519, 00:08:48.060 "namespaces": [ 00:08:48.060 { 00:08:48.060 "nsid": 1, 00:08:48.060 "bdev_name": "Null2", 00:08:48.060 "name": "Null2", 00:08:48.060 "nguid": "673361CEA65A491FB1B87BA187C1350E", 00:08:48.060 "uuid": "673361ce-a65a-491f-b1b8-7ba187c1350e" 00:08:48.060 } 00:08:48.060 ] 00:08:48.060 }, 00:08:48.060 { 00:08:48.060 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:48.060 "subtype": "NVMe", 00:08:48.060 "listen_addresses": [ 00:08:48.060 { 00:08:48.060 "transport": "TCP", 00:08:48.060 "trtype": "TCP", 00:08:48.060 "adrfam": "IPv4", 00:08:48.060 "traddr": "10.0.0.2", 00:08:48.060 "trsvcid": "4420" 00:08:48.060 } 00:08:48.060 ], 00:08:48.060 "allow_any_host": true, 00:08:48.060 "hosts": [], 00:08:48.060 "serial_number": "SPDK00000000000003", 00:08:48.060 "model_number": "SPDK bdev Controller", 00:08:48.060 "max_namespaces": 32, 00:08:48.060 "min_cntlid": 1, 00:08:48.060 "max_cntlid": 65519, 00:08:48.060 "namespaces": [ 00:08:48.060 { 00:08:48.060 "nsid": 1, 00:08:48.060 "bdev_name": "Null3", 00:08:48.060 "name": "Null3", 00:08:48.060 "nguid": "B477261E7E5C40DD96BA0CCE21468CD5", 00:08:48.060 "uuid": "b477261e-7e5c-40dd-96ba-0cce21468cd5" 00:08:48.060 } 00:08:48.060 ] 
00:08:48.060 }, 00:08:48.060 { 00:08:48.060 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:48.060 "subtype": "NVMe", 00:08:48.060 "listen_addresses": [ 00:08:48.060 { 00:08:48.060 "transport": "TCP", 00:08:48.060 "trtype": "TCP", 00:08:48.060 "adrfam": "IPv4", 00:08:48.060 "traddr": "10.0.0.2", 00:08:48.060 "trsvcid": "4420" 00:08:48.060 } 00:08:48.060 ], 00:08:48.060 "allow_any_host": true, 00:08:48.060 "hosts": [], 00:08:48.060 "serial_number": "SPDK00000000000004", 00:08:48.060 "model_number": "SPDK bdev Controller", 00:08:48.060 "max_namespaces": 32, 00:08:48.060 "min_cntlid": 1, 00:08:48.060 "max_cntlid": 65519, 00:08:48.060 "namespaces": [ 00:08:48.060 { 00:08:48.060 "nsid": 1, 00:08:48.060 "bdev_name": "Null4", 00:08:48.060 "name": "Null4", 00:08:48.060 "nguid": "2E43C0FBCF45444E87C6CCC1A8B95A26", 00:08:48.060 "uuid": "2e43c0fb-cf45-444e-87c6-ccc1a8b95a26" 00:08:48.060 } 00:08:48.060 ] 00:08:48.060 } 00:08:48.060 ] 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@42 -- # seq 1 4 00:08:48.060 10:39:04 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.060 10:39:04 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.060 10:39:04 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.060 10:39:04 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.060 10:39:04 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
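The six discovery log records and the nvmf_get_subsystems output above describe a target provisioned purely over RPC: one TCP transport, four null bdevs exposed as namespaces of cnode1..cnode4, a TCP listener per subsystem, a discovery listener, and one referral on port 4430. A compressed sketch of that provisioning, assuming SPDK's scripts/rpc.py (the trace issues the same RPCs through its rpc_cmd wrapper):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192               # transport options copied from the trace
for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create Null$i 102400 512                # null bdev; size/block size as in the trace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i                                    # -a: allow any host, -s: serial number
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover -t tcp -a 10.0.0.2 -s 4420                             # should report 6 records, as above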
00:08:48.060 10:39:04 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.060 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.060 10:39:04 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:48.060 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.060 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.319 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.319 10:39:04 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:48.319 10:39:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.319 10:39:04 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:48.319 10:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:48.319 10:39:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.319 10:39:04 -- target/discovery.sh@49 -- # check_bdevs= 00:08:48.319 10:39:04 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:48.319 10:39:04 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:48.319 10:39:04 -- target/discovery.sh@57 -- # nvmftestfini 00:08:48.319 10:39:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:48.319 10:39:04 -- nvmf/common.sh@116 -- # sync 00:08:48.319 10:39:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:48.319 10:39:04 -- nvmf/common.sh@119 -- # set +e 00:08:48.319 10:39:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:48.319 10:39:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:48.319 rmmod nvme_tcp 00:08:48.319 rmmod nvme_fabrics 00:08:48.319 rmmod nvme_keyring 00:08:48.319 10:39:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:48.319 10:39:04 -- nvmf/common.sh@123 -- # set -e 00:08:48.319 10:39:04 -- nvmf/common.sh@124 -- # return 0 00:08:48.319 10:39:04 -- nvmf/common.sh@477 -- # '[' -n 3362514 ']' 00:08:48.319 10:39:04 -- nvmf/common.sh@478 -- # killprocess 3362514 00:08:48.319 10:39:04 -- common/autotest_common.sh@926 -- # '[' -z 3362514 ']' 00:08:48.319 10:39:04 -- common/autotest_common.sh@930 -- # kill -0 3362514 00:08:48.319 10:39:04 -- common/autotest_common.sh@931 -- # uname 00:08:48.319 10:39:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:48.319 10:39:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3362514 00:08:48.319 10:39:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:48.319 10:39:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:48.319 10:39:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3362514' 00:08:48.319 killing process with pid 3362514 00:08:48.319 10:39:05 -- common/autotest_common.sh@945 -- # kill 3362514 00:08:48.319 [2024-07-10 10:39:05.019042] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:48.319 10:39:05 -- common/autotest_common.sh@950 -- # wait 3362514 00:08:48.577 10:39:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.577 10:39:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.577 10:39:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.577 10:39:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.577 10:39:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.577 10:39:05 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.577 10:39:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.577 10:39:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.485 10:39:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:50.485 00:08:50.485 real 0m6.022s 00:08:50.485 user 0m7.071s 00:08:50.485 sys 0m1.888s 00:08:50.485 10:39:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.485 10:39:07 -- common/autotest_common.sh@10 -- # set +x 00:08:50.485 ************************************ 00:08:50.485 END TEST nvmf_discovery 00:08:50.485 ************************************ 00:08:50.744 10:39:07 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:50.744 10:39:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.744 10:39:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.744 10:39:07 -- common/autotest_common.sh@10 -- # set +x 00:08:50.744 ************************************ 00:08:50.744 START TEST nvmf_referrals 00:08:50.744 ************************************ 00:08:50.744 10:39:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:50.744 * Looking for test storage... 00:08:50.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.744 10:39:07 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.744 10:39:07 -- nvmf/common.sh@7 -- # uname -s 00:08:50.744 10:39:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.744 10:39:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.744 10:39:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.744 10:39:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.744 10:39:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.744 10:39:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.744 10:39:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.744 10:39:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.744 10:39:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.744 10:39:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.744 10:39:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.744 10:39:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.744 10:39:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.744 10:39:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.744 10:39:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.744 10:39:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.744 10:39:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.744 10:39:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.744 10:39:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.744 10:39:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.744 10:39:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.744 10:39:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.745 10:39:07 -- paths/export.sh@5 -- # export PATH 00:08:50.745 10:39:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.745 10:39:07 -- nvmf/common.sh@46 -- # : 0 00:08:50.745 10:39:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.745 10:39:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.745 10:39:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.745 10:39:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.745 10:39:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.745 10:39:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.745 10:39:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.745 10:39:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.745 10:39:07 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:50.745 10:39:07 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:50.745 10:39:07 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:50.745 10:39:07 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:50.745 10:39:07 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:50.745 10:39:07 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:50.745 10:39:07 -- target/referrals.sh@37 -- # nvmftestinit 00:08:50.745 10:39:07 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:50.745 10:39:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.745 10:39:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.745 10:39:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.745 10:39:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.745 10:39:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.745 10:39:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.745 10:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.745 10:39:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:50.745 10:39:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:50.745 10:39:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:50.745 10:39:07 -- common/autotest_common.sh@10 -- # set +x 00:08:52.648 10:39:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:52.648 10:39:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:52.648 10:39:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:52.648 10:39:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:52.648 10:39:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:52.648 10:39:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:52.648 10:39:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:52.648 10:39:09 -- nvmf/common.sh@294 -- # net_devs=() 00:08:52.648 10:39:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:52.648 10:39:09 -- nvmf/common.sh@295 -- # e810=() 00:08:52.648 10:39:09 -- nvmf/common.sh@295 -- # local -ga e810 00:08:52.648 10:39:09 -- nvmf/common.sh@296 -- # x722=() 00:08:52.648 10:39:09 -- nvmf/common.sh@296 -- # local -ga x722 00:08:52.648 10:39:09 -- nvmf/common.sh@297 -- # mlx=() 00:08:52.648 10:39:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:52.648 10:39:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.648 10:39:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:52.648 10:39:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:52.648 10:39:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:52.648 10:39:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.648 10:39:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.648 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.648 10:39:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:52.648 10:39:09 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.648 10:39:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.648 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.648 10:39:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:52.648 10:39:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.648 10:39:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.648 10:39:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.648 10:39:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.648 10:39:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.648 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.648 10:39:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.648 10:39:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.648 10:39:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.648 10:39:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.648 10:39:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.648 10:39:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.648 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.648 10:39:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.648 10:39:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:52.648 10:39:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:52.648 10:39:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:52.648 10:39:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.648 10:39:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.648 10:39:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.648 10:39:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:52.648 10:39:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.648 10:39:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.648 10:39:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:52.648 10:39:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.648 10:39:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.648 10:39:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:52.648 10:39:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:52.648 10:39:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.648 10:39:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:52.648 10:39:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.648 10:39:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.648 10:39:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:52.648 10:39:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.648 10:39:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.648 10:39:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.648 10:39:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:52.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:08:52.648 00:08:52.648 --- 10.0.0.2 ping statistics --- 00:08:52.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.648 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:08:52.648 10:39:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:52.648 00:08:52.648 --- 10.0.0.1 ping statistics --- 00:08:52.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.648 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:52.648 10:39:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.648 10:39:09 -- nvmf/common.sh@410 -- # return 0 00:08:52.648 10:39:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:52.648 10:39:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.648 10:39:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:52.648 10:39:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.648 10:39:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:52.649 10:39:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:52.649 10:39:09 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:52.649 10:39:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.649 10:39:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:52.649 10:39:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.649 10:39:09 -- nvmf/common.sh@469 -- # nvmfpid=3365137 00:08:52.649 10:39:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.649 10:39:09 -- nvmf/common.sh@470 -- # waitforlisten 3365137 00:08:52.649 10:39:09 -- common/autotest_common.sh@819 -- # '[' -z 3365137 ']' 00:08:52.649 10:39:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.649 10:39:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.649 10:39:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.649 10:39:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.649 10:39:09 -- common/autotest_common.sh@10 -- # set +x 00:08:52.907 [2024-07-10 10:39:09.494093] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
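nvmfappstart in the trace launches the target inside the namespace and blocks until its RPC socket answers. A simplified stand-in for that helper, assuming an SPDK build tree as the working directory and the default RPC socket path /var/tmp/spdk.sock; the polling loop below is an illustrative replacement for the script's waitforlisten:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # same flags as the trace
nvmfpid=$!
# poll the RPC socket until the app is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done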
00:08:52.907 [2024-07-10 10:39:09.494171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.907 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.907 [2024-07-10 10:39:09.558907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.907 [2024-07-10 10:39:09.647806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:52.907 [2024-07-10 10:39:09.647947] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.907 [2024-07-10 10:39:09.647964] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.907 [2024-07-10 10:39:09.647976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.907 [2024-07-10 10:39:09.648051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.907 [2024-07-10 10:39:09.648112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.907 [2024-07-10 10:39:09.648114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.907 [2024-07-10 10:39:09.648085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.840 10:39:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:53.840 10:39:10 -- common/autotest_common.sh@852 -- # return 0 00:08:53.840 10:39:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:53.840 10:39:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 10:39:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.840 10:39:10 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 [2024-07-10 10:39:10.519238] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 [2024-07-10 10:39:10.531449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:53.840 10:39:10 -- target/referrals.sh@48 -- # jq length 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:53.840 10:39:10 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:53.840 10:39:10 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:53.840 10:39:10 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:53.840 10:39:10 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:53.840 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.840 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:53.840 10:39:10 -- target/referrals.sh@21 -- # sort 00:08:53.840 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:53.840 10:39:10 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:53.840 10:39:10 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:53.840 10:39:10 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:53.840 10:39:10 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:53.840 10:39:10 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:53.840 10:39:10 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:53.840 10:39:10 -- target/referrals.sh@26 -- # sort 00:08:54.098 10:39:10 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:54.098 10:39:10 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:54.098 10:39:10 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:54.098 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.098 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.098 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.098 10:39:10 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:54.098 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.098 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.098 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.098 10:39:10 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:54.098 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.098 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.098 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.098 10:39:10 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.098 10:39:10 -- target/referrals.sh@56 -- # jq length 00:08:54.098 10:39:10 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.098 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.098 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.098 10:39:10 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:54.099 10:39:10 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:54.099 10:39:10 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.099 10:39:10 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.099 10:39:10 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.099 10:39:10 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.099 10:39:10 -- target/referrals.sh@26 -- # sort 00:08:54.356 10:39:10 -- target/referrals.sh@26 -- # echo 00:08:54.356 10:39:10 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:54.356 10:39:10 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:54.356 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.356 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.356 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.356 10:39:10 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:54.356 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.356 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.356 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.356 10:39:10 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:54.356 10:39:10 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:54.356 10:39:10 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.356 10:39:10 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:54.356 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.356 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.356 10:39:10 -- target/referrals.sh@21 -- # sort 00:08:54.357 10:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.357 10:39:11 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:54.357 10:39:11 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:54.357 10:39:11 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:54.357 10:39:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.357 10:39:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.357 10:39:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.357 10:39:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.357 10:39:11 -- target/referrals.sh@26 -- # sort 00:08:54.615 10:39:11 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:54.615 10:39:11 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:54.615 10:39:11 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:54.615 10:39:11 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:54.615 10:39:11 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:54.615 10:39:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.615 10:39:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:54.615 10:39:11 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:54.615 10:39:11 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:54.615 10:39:11 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:54.615 10:39:11 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:54.615 10:39:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.615 10:39:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:54.873 10:39:11 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:54.873 10:39:11 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:54.873 10:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.873 10:39:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.873 10:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.873 10:39:11 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:54.873 10:39:11 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:54.873 10:39:11 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.873 10:39:11 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:54.873 10:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.873 10:39:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.873 10:39:11 -- target/referrals.sh@21 -- # sort 00:08:54.873 10:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.873 10:39:11 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:54.873 10:39:11 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:54.873 10:39:11 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:54.873 10:39:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.873 10:39:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.874 10:39:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.874 10:39:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.874 10:39:11 -- target/referrals.sh@26 -- # sort 00:08:54.874 10:39:11 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:54.874 10:39:11 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:54.874 10:39:11 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:54.874 10:39:11 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:54.874 10:39:11 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:54.874 10:39:11 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.874 10:39:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:55.131 10:39:11 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:55.131 10:39:11 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:55.131 10:39:11 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:55.131 10:39:11 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:55.131 10:39:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.131 10:39:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:55.389 10:39:11 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:55.389 10:39:11 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:55.389 10:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.389 10:39:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 10:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.389 10:39:11 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.389 10:39:11 -- target/referrals.sh@82 -- # jq length 00:08:55.389 10:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.389 10:39:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.389 10:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.389 10:39:12 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:55.389 10:39:12 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:55.389 10:39:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:55.389 10:39:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:55.389 10:39:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.389 10:39:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:55.389 10:39:12 -- target/referrals.sh@26 -- # sort 00:08:55.389 10:39:12 -- target/referrals.sh@26 -- # echo 00:08:55.389 10:39:12 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:55.389 10:39:12 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:55.389 10:39:12 -- target/referrals.sh@86 -- # nvmftestfini 00:08:55.389 10:39:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:55.389 10:39:12 -- nvmf/common.sh@116 -- # sync 00:08:55.389 10:39:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:55.389 10:39:12 -- nvmf/common.sh@119 -- # set +e 00:08:55.389 10:39:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:55.389 10:39:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:55.389 rmmod nvme_tcp 00:08:55.389 rmmod nvme_fabrics 00:08:55.389 rmmod nvme_keyring 00:08:55.389 10:39:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:55.389 10:39:12 -- nvmf/common.sh@123 -- # set -e 00:08:55.389 10:39:12 -- nvmf/common.sh@124 -- # return 0 00:08:55.389 10:39:12 -- nvmf/common.sh@477 
-- # '[' -n 3365137 ']' 00:08:55.389 10:39:12 -- nvmf/common.sh@478 -- # killprocess 3365137 00:08:55.389 10:39:12 -- common/autotest_common.sh@926 -- # '[' -z 3365137 ']' 00:08:55.389 10:39:12 -- common/autotest_common.sh@930 -- # kill -0 3365137 00:08:55.389 10:39:12 -- common/autotest_common.sh@931 -- # uname 00:08:55.389 10:39:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:55.389 10:39:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3365137 00:08:55.648 10:39:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:55.648 10:39:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:55.648 10:39:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3365137' 00:08:55.648 killing process with pid 3365137 00:08:55.648 10:39:12 -- common/autotest_common.sh@945 -- # kill 3365137 00:08:55.648 10:39:12 -- common/autotest_common.sh@950 -- # wait 3365137 00:08:55.648 10:39:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:55.648 10:39:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:55.648 10:39:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:55.648 10:39:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.907 10:39:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:55.907 10:39:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.907 10:39:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.907 10:39:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.809 10:39:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:57.809 00:08:57.809 real 0m7.199s 00:08:57.809 user 0m12.716s 00:08:57.809 sys 0m2.132s 00:08:57.809 10:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.809 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.809 ************************************ 00:08:57.809 END TEST nvmf_referrals 00:08:57.809 ************************************ 00:08:57.809 10:39:14 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:57.809 10:39:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:57.809 10:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.809 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.809 ************************************ 00:08:57.809 START TEST nvmf_connect_disconnect 00:08:57.809 ************************************ 00:08:57.809 10:39:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:57.809 * Looking for test storage... 
00:08:57.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.809 10:39:14 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.809 10:39:14 -- nvmf/common.sh@7 -- # uname -s 00:08:57.809 10:39:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.809 10:39:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.809 10:39:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.809 10:39:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.809 10:39:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.809 10:39:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.809 10:39:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.809 10:39:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.809 10:39:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.809 10:39:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.809 10:39:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.809 10:39:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.809 10:39:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.809 10:39:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.809 10:39:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.809 10:39:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.809 10:39:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.809 10:39:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.809 10:39:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.809 10:39:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.809 10:39:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.809 10:39:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.809 10:39:14 -- paths/export.sh@5 -- # export PATH 00:08:57.809 10:39:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.809 10:39:14 -- nvmf/common.sh@46 -- # : 0 00:08:57.809 10:39:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:57.809 10:39:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:57.810 10:39:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:57.810 10:39:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.810 10:39:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.810 10:39:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:57.810 10:39:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:57.810 10:39:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:57.810 10:39:14 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.810 10:39:14 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.810 10:39:14 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:57.810 10:39:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:57.810 10:39:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.810 10:39:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:57.810 10:39:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:57.810 10:39:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:57.810 10:39:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.810 10:39:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.810 10:39:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.810 10:39:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:57.810 10:39:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:57.810 10:39:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:57.810 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:09:00.331 10:39:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:00.331 10:39:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:00.331 10:39:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:00.331 10:39:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:00.331 10:39:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:00.331 10:39:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:00.331 10:39:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:00.331 10:39:16 -- nvmf/common.sh@294 -- # net_devs=() 00:09:00.331 10:39:16 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:09:00.331 10:39:16 -- nvmf/common.sh@295 -- # e810=() 00:09:00.331 10:39:16 -- nvmf/common.sh@295 -- # local -ga e810 00:09:00.331 10:39:16 -- nvmf/common.sh@296 -- # x722=() 00:09:00.331 10:39:16 -- nvmf/common.sh@296 -- # local -ga x722 00:09:00.331 10:39:16 -- nvmf/common.sh@297 -- # mlx=() 00:09:00.331 10:39:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:00.331 10:39:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.331 10:39:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:00.331 10:39:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:00.331 10:39:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:00.331 10:39:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:00.331 10:39:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:00.331 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:00.331 10:39:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:00.331 10:39:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:00.331 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:00.331 10:39:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:00.331 10:39:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:00.331 10:39:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:00.332 10:39:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:00.332 10:39:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:00.332 10:39:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.332 10:39:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:00.332 10:39:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.332 10:39:16 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:09:00.332 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:00.332 10:39:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.332 10:39:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:00.332 10:39:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.332 10:39:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:00.332 10:39:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.332 10:39:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:00.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:00.332 10:39:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.332 10:39:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:00.332 10:39:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:00.332 10:39:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:00.332 10:39:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:00.332 10:39:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:00.332 10:39:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.332 10:39:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.332 10:39:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.332 10:39:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:00.332 10:39:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.332 10:39:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.332 10:39:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:00.332 10:39:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.332 10:39:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.332 10:39:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:00.332 10:39:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:00.332 10:39:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.332 10:39:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.332 10:39:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.332 10:39:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.332 10:39:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:00.332 10:39:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.332 10:39:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.332 10:39:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.332 10:39:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:00.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:09:00.332 00:09:00.332 --- 10.0.0.2 ping statistics --- 00:09:00.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.332 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:09:00.332 10:39:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:09:00.332 00:09:00.332 --- 10.0.0.1 ping statistics --- 00:09:00.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.332 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:00.332 10:39:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.332 10:39:16 -- nvmf/common.sh@410 -- # return 0 00:09:00.332 10:39:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:00.332 10:39:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.332 10:39:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:00.332 10:39:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:00.332 10:39:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.332 10:39:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:00.332 10:39:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:00.332 10:39:16 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:00.332 10:39:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:00.332 10:39:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:00.332 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.332 10:39:16 -- nvmf/common.sh@469 -- # nvmfpid=3367584 00:09:00.332 10:39:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.332 10:39:16 -- nvmf/common.sh@470 -- # waitforlisten 3367584 00:09:00.332 10:39:16 -- common/autotest_common.sh@819 -- # '[' -z 3367584 ']' 00:09:00.332 10:39:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.332 10:39:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:00.332 10:39:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.332 10:39:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:00.332 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.332 [2024-07-10 10:39:16.812970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:00.332 [2024-07-10 10:39:16.813042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.332 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.332 [2024-07-10 10:39:16.878233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.332 [2024-07-10 10:39:16.969044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.332 [2024-07-10 10:39:16.969195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.332 [2024-07-10 10:39:16.969213] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.332 [2024-07-10 10:39:16.969226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:00.332 [2024-07-10 10:39:16.969287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.332 [2024-07-10 10:39:16.969342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.332 [2024-07-10 10:39:16.969557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.332 [2024-07-10 10:39:16.969562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.264 10:39:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.264 10:39:17 -- common/autotest_common.sh@852 -- # return 0 00:09:01.264 10:39:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:01.264 10:39:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:01.264 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 10:39:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:01.264 10:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.264 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 [2024-07-10 10:39:17.828066] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.264 10:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:01.264 10:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.264 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 10:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.264 10:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.264 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 10:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.264 10:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.264 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 10:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.264 10:39:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.264 10:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:01.264 [2024-07-10 10:39:17.879313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.264 10:39:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:01.264 10:39:17 -- target/connect_disconnect.sh@34 -- # set +x 00:09:03.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:12.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.752 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:05.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.618 10:43:08 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
00:12:51.618 10:43:08 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:51.618 10:43:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.618 10:43:08 -- nvmf/common.sh@116 -- # sync 00:12:51.618 10:43:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.618 10:43:08 -- nvmf/common.sh@119 -- # set +e 00:12:51.618 10:43:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.618 10:43:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.618 rmmod nvme_tcp 00:12:51.618 rmmod nvme_fabrics 00:12:51.618 rmmod nvme_keyring 00:12:51.618 10:43:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.618 10:43:08 -- nvmf/common.sh@123 -- # set -e 00:12:51.618 10:43:08 -- nvmf/common.sh@124 -- # return 0 00:12:51.618 10:43:08 -- nvmf/common.sh@477 -- # '[' -n 3367584 ']' 00:12:51.618 10:43:08 -- nvmf/common.sh@478 -- # killprocess 3367584 00:12:51.618 10:43:08 -- common/autotest_common.sh@926 -- # '[' -z 3367584 ']' 00:12:51.618 10:43:08 -- common/autotest_common.sh@930 -- # kill -0 3367584 00:12:51.618 10:43:08 -- common/autotest_common.sh@931 -- # uname 00:12:51.618 10:43:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:51.618 10:43:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3367584 00:12:51.618 10:43:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:51.618 10:43:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:51.618 10:43:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3367584' 00:12:51.618 killing process with pid 3367584 00:12:51.618 10:43:08 -- common/autotest_common.sh@945 -- # kill 3367584 00:12:51.618 10:43:08 -- common/autotest_common.sh@950 -- # wait 3367584 00:12:51.618 10:43:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:51.618 10:43:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:51.618 10:43:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:51.618 10:43:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.618 10:43:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:51.618 10:43:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.618 10:43:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.618 10:43:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.149 10:43:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:54.149 00:12:54.149 real 3m55.841s 00:12:54.149 user 14m57.773s 00:12:54.149 sys 0m35.252s 00:12:54.149 10:43:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.149 10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:12:54.149 ************************************ 00:12:54.149 END TEST nvmf_connect_disconnect 00:12:54.149 ************************************ 00:12:54.149 10:43:10 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.149 10:43:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:54.149 10:43:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.149 10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:12:54.149 ************************************ 00:12:54.149 START TEST nvmf_multitarget 00:12:54.149 ************************************ 00:12:54.149 10:43:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.149 * Looking for test storage... 
00:12:54.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.149 10:43:10 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.149 10:43:10 -- nvmf/common.sh@7 -- # uname -s 00:12:54.149 10:43:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.149 10:43:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.149 10:43:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.149 10:43:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.149 10:43:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.149 10:43:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.149 10:43:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.149 10:43:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.149 10:43:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.149 10:43:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.149 10:43:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.149 10:43:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.149 10:43:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.149 10:43:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.149 10:43:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.149 10:43:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.149 10:43:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.149 10:43:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.149 10:43:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.149 10:43:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.150 10:43:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.150 10:43:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.150 10:43:10 -- paths/export.sh@5 -- # export PATH 00:12:54.150 10:43:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.150 10:43:10 -- nvmf/common.sh@46 -- # : 0 00:12:54.150 10:43:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:54.150 10:43:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:54.150 10:43:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:54.150 10:43:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.150 10:43:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.150 10:43:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:54.150 10:43:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:54.150 10:43:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:54.150 10:43:10 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.150 10:43:10 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:54.150 10:43:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:54.150 10:43:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.150 10:43:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:54.150 10:43:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:54.150 10:43:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:54.150 10:43:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.150 10:43:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.150 10:43:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.150 10:43:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:54.150 10:43:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:54.150 10:43:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:54.150 10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:12:56.053 10:43:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:56.053 10:43:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:56.053 10:43:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:56.053 10:43:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:56.053 10:43:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:56.053 10:43:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:56.053 10:43:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:56.053 10:43:12 -- nvmf/common.sh@294 -- # net_devs=() 00:12:56.053 10:43:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:56.053 10:43:12 -- 
nvmf/common.sh@295 -- # e810=() 00:12:56.053 10:43:12 -- nvmf/common.sh@295 -- # local -ga e810 00:12:56.053 10:43:12 -- nvmf/common.sh@296 -- # x722=() 00:12:56.053 10:43:12 -- nvmf/common.sh@296 -- # local -ga x722 00:12:56.053 10:43:12 -- nvmf/common.sh@297 -- # mlx=() 00:12:56.053 10:43:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:56.053 10:43:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.053 10:43:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:56.053 10:43:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:56.053 10:43:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:56.053 10:43:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:56.053 10:43:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.053 10:43:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:56.053 10:43:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.053 10:43:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:56.053 10:43:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:56.053 10:43:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.053 10:43:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:56.053 10:43:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.053 10:43:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:12:56.053 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.053 10:43:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.053 10:43:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:56.053 10:43:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.053 10:43:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:56.053 10:43:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.053 10:43:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.053 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.053 10:43:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.053 10:43:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:56.053 10:43:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:56.053 10:43:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:56.053 10:43:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.053 10:43:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.053 10:43:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.053 10:43:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:56.053 10:43:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.053 10:43:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.053 10:43:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:56.053 10:43:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.053 10:43:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.053 10:43:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:56.053 10:43:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:56.053 10:43:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.053 10:43:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.053 10:43:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.053 10:43:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.053 10:43:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:56.053 10:43:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.053 10:43:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.053 10:43:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.053 10:43:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:56.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:12:56.053 00:12:56.053 --- 10.0.0.2 ping statistics --- 00:12:56.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.053 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:12:56.053 10:43:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:56.053 00:12:56.053 --- 10.0.0.1 ping statistics --- 00:12:56.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.053 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:56.053 10:43:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.053 10:43:12 -- nvmf/common.sh@410 -- # return 0 00:12:56.053 10:43:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:56.053 10:43:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.053 10:43:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:56.053 10:43:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.053 10:43:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:56.053 10:43:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:56.053 10:43:12 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:56.053 10:43:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:56.053 10:43:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:56.053 10:43:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.053 10:43:12 -- nvmf/common.sh@469 -- # nvmfpid=3399503 00:12:56.053 10:43:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.053 10:43:12 -- nvmf/common.sh@470 -- # waitforlisten 3399503 00:12:56.053 10:43:12 -- common/autotest_common.sh@819 -- # '[' -z 3399503 ']' 00:12:56.053 10:43:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.053 10:43:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:56.053 10:43:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.053 10:43:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:56.053 10:43:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.053 [2024-07-10 10:43:12.747571] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:56.053 [2024-07-10 10:43:12.747664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.053 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.053 [2024-07-10 10:43:12.817937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.322 [2024-07-10 10:43:12.913686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:56.322 [2024-07-10 10:43:12.913855] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.322 [2024-07-10 10:43:12.913877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.322 [2024-07-10 10:43:12.913892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:56.322 [2024-07-10 10:43:12.913945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.322 [2024-07-10 10:43:12.914002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.322 [2024-07-10 10:43:12.914035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.322 [2024-07-10 10:43:12.914038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.892 10:43:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:56.892 10:43:13 -- common/autotest_common.sh@852 -- # return 0 00:12:56.892 10:43:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:56.892 10:43:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:56.892 10:43:13 -- common/autotest_common.sh@10 -- # set +x 00:12:56.892 10:43:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.892 10:43:13 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.892 10:43:13 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:56.892 10:43:13 -- target/multitarget.sh@21 -- # jq length 00:12:57.150 10:43:13 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:57.150 10:43:13 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:57.150 "nvmf_tgt_1" 00:12:57.150 10:43:13 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:57.408 "nvmf_tgt_2" 00:12:57.408 10:43:14 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.408 10:43:14 -- target/multitarget.sh@28 -- # jq length 00:12:57.408 10:43:14 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:57.408 10:43:14 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:57.665 true 00:12:57.665 10:43:14 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:57.665 true 00:12:57.665 10:43:14 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.665 10:43:14 -- target/multitarget.sh@35 -- # jq length 00:12:57.923 10:43:14 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:57.923 10:43:14 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:57.923 10:43:14 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:57.923 10:43:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:57.923 10:43:14 -- nvmf/common.sh@116 -- # sync 00:12:57.923 10:43:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:57.923 10:43:14 -- nvmf/common.sh@119 -- # set +e 00:12:57.923 10:43:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:57.923 10:43:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:57.923 rmmod nvme_tcp 00:12:57.923 rmmod nvme_fabrics 00:12:57.923 rmmod nvme_keyring 00:12:57.923 10:43:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:57.923 10:43:14 -- nvmf/common.sh@123 -- # set -e 00:12:57.923 10:43:14 -- nvmf/common.sh@124 -- # return 0 
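[editor's note] The multitarget body above exercises the per-target RPC surface through test/nvmf/target/multitarget_rpc.py. Condensed to its essentials, and assuming the target application is already up with its RPC socket listening, the sequence the trace ran was roughly:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  # One implicit target exists at start-up.
  $rpc nvmf_get_targets | jq length            # expect 1
  # Add two named targets (the -s 32 argument is passed exactly as in the run above).
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length            # expect 3
  # Remove them again and confirm only the default target remains.
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length            # expect 1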
00:12:57.923 10:43:14 -- nvmf/common.sh@477 -- # '[' -n 3399503 ']' 00:12:57.923 10:43:14 -- nvmf/common.sh@478 -- # killprocess 3399503 00:12:57.923 10:43:14 -- common/autotest_common.sh@926 -- # '[' -z 3399503 ']' 00:12:57.923 10:43:14 -- common/autotest_common.sh@930 -- # kill -0 3399503 00:12:57.923 10:43:14 -- common/autotest_common.sh@931 -- # uname 00:12:57.923 10:43:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:57.923 10:43:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3399503 00:12:57.923 10:43:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:57.923 10:43:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:57.923 10:43:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3399503' 00:12:57.923 killing process with pid 3399503 00:12:57.923 10:43:14 -- common/autotest_common.sh@945 -- # kill 3399503 00:12:57.923 10:43:14 -- common/autotest_common.sh@950 -- # wait 3399503 00:12:58.182 10:43:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.182 10:43:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.182 10:43:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.182 10:43:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.182 10:43:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.182 10:43:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.182 10:43:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.182 10:43:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.085 10:43:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:00.085 00:13:00.085 real 0m6.417s 00:13:00.085 user 0m9.115s 00:13:00.085 sys 0m2.021s 00:13:00.085 10:43:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.085 10:43:16 -- common/autotest_common.sh@10 -- # set +x 00:13:00.085 ************************************ 00:13:00.085 END TEST nvmf_multitarget 00:13:00.085 ************************************ 00:13:00.085 10:43:16 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.085 10:43:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:00.085 10:43:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:00.085 10:43:16 -- common/autotest_common.sh@10 -- # set +x 00:13:00.085 ************************************ 00:13:00.085 START TEST nvmf_rpc 00:13:00.085 ************************************ 00:13:00.085 10:43:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.085 * Looking for test storage... 
00:13:00.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.085 10:43:16 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.343 10:43:16 -- nvmf/common.sh@7 -- # uname -s 00:13:00.343 10:43:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.343 10:43:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.343 10:43:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.343 10:43:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.343 10:43:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.343 10:43:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.343 10:43:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.343 10:43:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.343 10:43:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.343 10:43:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.343 10:43:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.343 10:43:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.343 10:43:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.343 10:43:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.343 10:43:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.343 10:43:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.343 10:43:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.343 10:43:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.343 10:43:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.343 10:43:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.343 10:43:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.343 10:43:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.343 10:43:16 -- paths/export.sh@5 -- # export PATH 00:13:00.343 10:43:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.343 10:43:16 -- nvmf/common.sh@46 -- # : 0 00:13:00.343 10:43:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.343 10:43:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.343 10:43:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.343 10:43:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.343 10:43:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.343 10:43:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:00.343 10:43:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.343 10:43:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.343 10:43:16 -- target/rpc.sh@11 -- # loops=5 00:13:00.343 10:43:16 -- target/rpc.sh@23 -- # nvmftestinit 00:13:00.343 10:43:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.343 10:43:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.343 10:43:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:00.344 10:43:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.344 10:43:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.344 10:43:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.344 10:43:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.344 10:43:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.344 10:43:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:00.344 10:43:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:00.344 10:43:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:00.344 10:43:16 -- common/autotest_common.sh@10 -- # set +x 00:13:02.244 10:43:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:02.244 10:43:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:02.244 10:43:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:02.244 10:43:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:02.244 10:43:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:02.244 10:43:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:02.244 10:43:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:02.244 10:43:18 -- nvmf/common.sh@294 -- # net_devs=() 00:13:02.244 10:43:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:02.244 10:43:18 -- nvmf/common.sh@295 -- # e810=() 00:13:02.244 10:43:18 -- nvmf/common.sh@295 -- # local -ga e810 00:13:02.244 
10:43:18 -- nvmf/common.sh@296 -- # x722=() 00:13:02.244 10:43:18 -- nvmf/common.sh@296 -- # local -ga x722 00:13:02.244 10:43:18 -- nvmf/common.sh@297 -- # mlx=() 00:13:02.244 10:43:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:02.244 10:43:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.244 10:43:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:02.244 10:43:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:02.244 10:43:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:02.244 10:43:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.244 10:43:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.244 10:43:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.244 10:43:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.244 10:43:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:02.244 10:43:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.244 10:43:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.244 10:43:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.244 10:43:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.244 10:43:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.244 10:43:18 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:02.244 10:43:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.244 10:43:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.244 10:43:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.244 10:43:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.244 10:43:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.244 10:43:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.244 10:43:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:02.244 10:43:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:02.244 10:43:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:02.244 10:43:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:02.244 10:43:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.244 10:43:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.244 10:43:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.244 10:43:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:02.244 10:43:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.244 10:43:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.244 10:43:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:02.244 10:43:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.244 10:43:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.244 10:43:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:02.244 10:43:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:02.244 10:43:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.244 10:43:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.244 10:43:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.244 10:43:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.244 10:43:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:02.244 10:43:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.244 10:43:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.244 10:43:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.244 10:43:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:02.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:13:02.244 00:13:02.244 --- 10.0.0.2 ping statistics --- 00:13:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.244 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:13:02.244 10:43:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:02.244 00:13:02.244 --- 10.0.0.1 ping statistics --- 00:13:02.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.244 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:02.244 10:43:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.244 10:43:19 -- nvmf/common.sh@410 -- # return 0 00:13:02.244 10:43:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:02.244 10:43:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.244 10:43:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:02.244 10:43:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:02.244 10:43:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.244 10:43:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:02.244 10:43:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:02.244 10:43:19 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:02.244 10:43:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:02.244 10:43:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:02.244 10:43:19 -- common/autotest_common.sh@10 -- # set +x 00:13:02.245 10:43:19 -- nvmf/common.sh@469 -- # nvmfpid=3401746 00:13:02.245 10:43:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.245 10:43:19 -- nvmf/common.sh@470 -- # waitforlisten 3401746 00:13:02.245 10:43:19 -- common/autotest_common.sh@819 -- # '[' -z 3401746 ']' 00:13:02.245 10:43:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.245 10:43:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:02.245 10:43:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.245 10:43:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:02.245 10:43:19 -- common/autotest_common.sh@10 -- # set +x 00:13:02.504 [2024-07-10 10:43:19.091803] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:02.504 [2024-07-10 10:43:19.091899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.504 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.504 [2024-07-10 10:43:19.161128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.504 [2024-07-10 10:43:19.252782] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.504 [2024-07-10 10:43:19.252955] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.504 [2024-07-10 10:43:19.252976] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.504 [2024-07-10 10:43:19.252991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
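[editor's note] nvmfappstart, traced above, launches the target application inside the target namespace and waits for its RPC socket before issuing any RPCs. A stripped-down equivalent is sketched below; the socket-polling loop and the pid capture are simplifications of the harness's waitforlisten helper, not its actual implementation.
  NVMF_APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  # Run the target in the namespace that owns cvl_0_0 / 10.0.0.2, with the same core mask as the run above.
  ip netns exec cvl_0_0_ns_spdk $NVMF_APP -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Simplified stand-in for waitforlisten: poll for the default RPC socket.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
  # The initiator side needs the kernel NVMe/TCP host driver.
  modprobe nvme-tcp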
00:13:02.504 [2024-07-10 10:43:19.253074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.504 [2024-07-10 10:43:19.253131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.504 [2024-07-10 10:43:19.253185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.504 [2024-07-10 10:43:19.253187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.439 10:43:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:03.439 10:43:20 -- common/autotest_common.sh@852 -- # return 0 00:13:03.439 10:43:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:03.439 10:43:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.439 10:43:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.439 10:43:20 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:03.439 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.439 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.439 10:43:20 -- target/rpc.sh@26 -- # stats='{ 00:13:03.439 "tick_rate": 2700000000, 00:13:03.439 "poll_groups": [ 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_0", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [] 00:13:03.439 }, 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_1", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [] 00:13:03.439 }, 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_2", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [] 00:13:03.439 }, 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_3", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [] 00:13:03.439 } 00:13:03.439 ] 00:13:03.439 }' 00:13:03.439 10:43:20 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:03.439 10:43:20 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:03.439 10:43:20 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:03.439 10:43:20 -- target/rpc.sh@15 -- # wc -l 00:13:03.439 10:43:20 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:03.439 10:43:20 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:03.439 10:43:20 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:03.439 10:43:20 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.439 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.439 [2024-07-10 10:43:20.134249] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.439 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.439 10:43:20 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:03.439 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.439 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.439 10:43:20 -- target/rpc.sh@33 -- # stats='{ 00:13:03.439 "tick_rate": 2700000000, 00:13:03.439 "poll_groups": [ 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_0", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [ 00:13:03.439 { 00:13:03.439 "trtype": "TCP" 00:13:03.439 } 00:13:03.439 ] 00:13:03.439 }, 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_1", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [ 00:13:03.439 { 00:13:03.439 "trtype": "TCP" 00:13:03.439 } 00:13:03.439 ] 00:13:03.439 }, 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_2", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [ 00:13:03.439 { 00:13:03.439 "trtype": "TCP" 00:13:03.439 } 00:13:03.439 ] 00:13:03.439 }, 00:13:03.439 { 00:13:03.439 "name": "nvmf_tgt_poll_group_3", 00:13:03.439 "admin_qpairs": 0, 00:13:03.439 "io_qpairs": 0, 00:13:03.439 "current_admin_qpairs": 0, 00:13:03.439 "current_io_qpairs": 0, 00:13:03.439 "pending_bdev_io": 0, 00:13:03.439 "completed_nvme_io": 0, 00:13:03.439 "transports": [ 00:13:03.439 { 00:13:03.439 "trtype": "TCP" 00:13:03.439 } 00:13:03.439 ] 00:13:03.439 } 00:13:03.439 ] 00:13:03.439 }' 00:13:03.439 10:43:20 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:03.439 10:43:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:03.439 10:43:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:03.439 10:43:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.439 10:43:20 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:03.439 10:43:20 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:03.439 10:43:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:03.439 10:43:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:03.439 10:43:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.439 10:43:20 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:03.439 10:43:20 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:03.439 10:43:20 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:03.439 10:43:20 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:03.439 10:43:20 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.439 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.439 Malloc1 00:13:03.439 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.439 10:43:20 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.439 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.439 
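[editor's note] The jcount/jsum checks above reduce the nvmf_get_stats JSON to a poll-group count and per-group queue-pair totals. Roughly, with rpc_cmd standing for the suite's JSON-RPC wrapper over the default /var/tmp/spdk.sock socket:
  stats=$(rpc_cmd nvmf_get_stats)
  # One poll group per reactor core (-m 0xF above => 4 groups).
  echo "$stats" | jq '.poll_groups[].name' | wc -l
  # Sum admin and I/O queue pairs across all poll groups; both are 0 before any host connects.
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'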
10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.439 10:43:20 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.439 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.439 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.697 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.697 10:43:20 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:03.697 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.697 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.697 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.697 10:43:20 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.697 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.697 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.697 [2024-07-10 10:43:20.279552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.697 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.697 10:43:20 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.697 10:43:20 -- common/autotest_common.sh@640 -- # local es=0 00:13:03.697 10:43:20 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.697 10:43:20 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:03.697 10:43:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:03.697 10:43:20 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:03.697 10:43:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:03.697 10:43:20 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:03.697 10:43:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:03.697 10:43:20 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:03.697 10:43:20 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.697 10:43:20 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.697 [2024-07-10 10:43:20.302035] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:03.697 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.697 could not add new controller: failed to write to nvme-fabrics device 00:13:03.697 10:43:20 -- common/autotest_common.sh@643 -- # es=1 00:13:03.697 10:43:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:03.697 10:43:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:03.697 10:43:20 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:03.697 10:43:20 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.697 10:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.697 10:43:20 -- common/autotest_common.sh@10 -- # set +x 00:13:03.697 10:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.697 10:43:20 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.263 10:43:21 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.263 10:43:21 -- common/autotest_common.sh@1177 -- # local i=0 00:13:04.263 10:43:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.263 10:43:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:04.263 10:43:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:06.790 10:43:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:06.790 10:43:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:06.790 10:43:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.790 10:43:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:06.790 10:43:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.790 10:43:23 -- common/autotest_common.sh@1187 -- # return 0 00:13:06.790 10:43:23 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.790 10:43:23 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.790 10:43:23 -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.790 10:43:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:06.790 10:43:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.790 10:43:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:06.790 10:43:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.790 10:43:23 -- common/autotest_common.sh@1210 -- # return 0 00:13:06.790 10:43:23 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.790 10:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.790 10:43:23 -- common/autotest_common.sh@10 -- # set +x 00:13:06.790 10:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.790 10:43:23 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.790 10:43:23 -- common/autotest_common.sh@640 -- # local es=0 00:13:06.790 10:43:23 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.790 10:43:23 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:06.790 10:43:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:06.790 10:43:23 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:06.790 10:43:23 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:06.790 10:43:23 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:06.790 10:43:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:06.790 10:43:23 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:06.790 10:43:23 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:06.790 10:43:23 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.790 [2024-07-10 10:43:23.133334] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:06.790 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:06.790 could not add new controller: failed to write to nvme-fabrics device 00:13:06.790 10:43:23 -- common/autotest_common.sh@643 -- # es=1 00:13:06.790 10:43:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:06.790 10:43:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:06.790 10:43:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:06.790 10:43:23 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:06.790 10:43:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.790 10:43:23 -- common/autotest_common.sh@10 -- # set +x 00:13:06.790 10:43:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.790 10:43:23 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.047 10:43:23 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.047 10:43:23 -- common/autotest_common.sh@1177 -- # local i=0 00:13:07.047 10:43:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.047 10:43:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:07.047 10:43:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:08.947 10:43:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:08.947 10:43:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:08.947 10:43:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.205 10:43:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:09.205 10:43:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.205 10:43:25 -- common/autotest_common.sh@1187 -- # return 0 00:13:09.205 10:43:25 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.205 10:43:25 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.205 10:43:25 -- common/autotest_common.sh@1198 -- # local i=0 00:13:09.205 10:43:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:09.205 10:43:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.205 10:43:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:09.205 10:43:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.205 10:43:25 -- common/autotest_common.sh@1210 -- # return 0 00:13:09.205 10:43:25 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.205 10:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.205 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 10:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.205 10:43:25 -- target/rpc.sh@81 -- # seq 1 5 00:13:09.205 10:43:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.205 10:43:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.205 10:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.205 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 10:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.205 10:43:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.205 10:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.205 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 [2024-07-10 10:43:25.916655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.205 10:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.205 10:43:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.205 10:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.205 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 10:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.205 10:43:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.205 10:43:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.205 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 10:43:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.205 10:43:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.139 10:43:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.139 10:43:26 -- common/autotest_common.sh@1177 -- # local i=0 00:13:10.139 10:43:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.139 10:43:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:10.139 10:43:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:12.085 10:43:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:12.085 10:43:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:12.085 10:43:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.085 10:43:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:12.086 10:43:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.086 10:43:28 -- common/autotest_common.sh@1187 -- # return 0 00:13:12.086 10:43:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.086 10:43:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.086 10:43:28 -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.086 10:43:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:12.086 10:43:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
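[editor's note] Each pass of the loop above builds a subsystem, attaches a namespace backed by Malloc1, lets the kernel host connect, then tears everything down. One iteration, condensed from the trace: the hostnqn/hostid values come from the nvme gen-hostnqn output earlier in the run, and waitforserial is approximated here by the same lsblk check the harness uses.
  # Target side: subsystem with serial SPDKISFASTANDAWESOME, TCP listener, one namespace (nsid 5).
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  # Host side: connect over TCP and wait until the namespace shows up by serial.
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 1; done
  # Teardown: disconnect, drop the namespace, delete the subsystem.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1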
00:13:12.086 10:43:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:12.086 10:43:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.086 10:43:28 -- common/autotest_common.sh@1210 -- # return 0 00:13:12.086 10:43:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.086 10:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.086 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:12.086 10:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.086 10:43:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.086 10:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.086 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:12.086 10:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.086 10:43:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.086 10:43:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.086 10:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.086 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:12.086 10:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.086 10:43:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.086 10:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.086 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:12.086 [2024-07-10 10:43:28.750205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.086 10:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.086 10:43:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.086 10:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.086 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:12.086 10:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.086 10:43:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.086 10:43:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.086 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:13:12.086 10:43:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.086 10:43:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.652 10:43:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.652 10:43:29 -- common/autotest_common.sh@1177 -- # local i=0 00:13:12.652 10:43:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.652 10:43:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:12.652 10:43:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:14.545 10:43:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:14.545 10:43:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:14.545 10:43:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.545 10:43:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:14.545 10:43:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.801 10:43:31 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:14.801 10:43:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.801 10:43:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.801 10:43:31 -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.801 10:43:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:14.801 10:43:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.801 10:43:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:14.801 10:43:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.801 10:43:31 -- common/autotest_common.sh@1210 -- # return 0 00:13:14.801 10:43:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.801 10:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.801 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.801 10:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.801 10:43:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.801 10:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.801 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.801 10:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.801 10:43:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.801 10:43:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.801 10:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.801 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.801 10:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.801 10:43:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.801 10:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.801 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.801 [2024-07-10 10:43:31.517379] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.801 10:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.801 10:43:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.801 10:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.801 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.801 10:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.801 10:43:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.801 10:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.801 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.801 10:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.801 10:43:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.362 10:43:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.362 10:43:32 -- common/autotest_common.sh@1177 -- # local i=0 00:13:15.362 10:43:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.362 10:43:32 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:15.362 10:43:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:17.884 10:43:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:17.884 10:43:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:17.884 10:43:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.884 10:43:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:17.884 10:43:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.884 10:43:34 -- common/autotest_common.sh@1187 -- # return 0 00:13:17.884 10:43:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.884 10:43:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.884 10:43:34 -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.884 10:43:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:17.884 10:43:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.884 10:43:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:17.884 10:43:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.884 10:43:34 -- common/autotest_common.sh@1210 -- # return 0 00:13:17.884 10:43:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.884 10:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.884 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 10:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.884 10:43:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.884 10:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.884 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 10:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.884 10:43:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.884 10:43:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.884 10:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.884 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 10:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.884 10:43:34 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.884 10:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.884 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 [2024-07-10 10:43:34.324271] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.884 10:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.884 10:43:34 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.884 10:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.884 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 10:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.884 10:43:34 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.884 10:43:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.884 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:13:17.884 10:43:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.884 
10:43:34 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.141 10:43:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.141 10:43:34 -- common/autotest_common.sh@1177 -- # local i=0 00:13:18.141 10:43:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.141 10:43:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:18.141 10:43:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:20.663 10:43:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:20.663 10:43:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:20.663 10:43:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.663 10:43:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:20.663 10:43:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.663 10:43:36 -- common/autotest_common.sh@1187 -- # return 0 00:13:20.663 10:43:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.663 10:43:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.663 10:43:37 -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.663 10:43:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:20.663 10:43:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.663 10:43:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:20.663 10:43:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.663 10:43:37 -- common/autotest_common.sh@1210 -- # return 0 00:13:20.663 10:43:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.663 10:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.663 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 10:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.663 10:43:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.663 10:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.663 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 10:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.663 10:43:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:20.663 10:43:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.663 10:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.663 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 10:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.663 10:43:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.663 10:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.663 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 [2024-07-10 10:43:37.108986] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.663 10:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.663 10:43:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:20.663 
10:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.663 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 10:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.663 10:43:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.663 10:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.663 10:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 10:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.663 10:43:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.226 10:43:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.226 10:43:37 -- common/autotest_common.sh@1177 -- # local i=0 00:13:21.226 10:43:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.226 10:43:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:21.226 10:43:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:23.122 10:43:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:23.122 10:43:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:23.122 10:43:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.122 10:43:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:23.122 10:43:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.122 10:43:39 -- common/autotest_common.sh@1187 -- # return 0 00:13:23.122 10:43:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.122 10:43:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.122 10:43:39 -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.122 10:43:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:23.122 10:43:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.122 10:43:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:23.122 10:43:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.122 10:43:39 -- common/autotest_common.sh@1210 -- # return 0 00:13:23.122 10:43:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.122 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.122 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.122 10:43:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.122 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.122 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.122 10:43:39 -- target/rpc.sh@99 -- # seq 1 5 00:13:23.122 10:43:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.122 10:43:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.122 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.122 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.122 10:43:39 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.122 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.122 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 [2024-07-10 10:43:39.933288] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.122 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.122 10:43:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.122 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.122 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.381 10:43:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 [2024-07-10 10:43:39.981380] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.381 10:43:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.381 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:40 -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.381 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.381 10:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.381 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.381 10:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.381 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.381 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 [2024-07-10 10:43:40.029577] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.382 10:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 [2024-07-10 10:43:40.077745] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 
10:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.382 10:43:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 [2024-07-10 10:43:40.125901] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
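This second loop (rpc.sh@99-107) calls nvmf_subsystem_add_ns without an explicit -n, so the target assigns the next free namespace ID, which is why the matching nvmf_subsystem_remove_ns above uses NSID 1 rather than the 5 used earlier. A condensed sketch of one such pass, assuming no other namespaces are attached to cnode1:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # no -n: target picks NSID 1
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1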
00:13:23.382 10:43:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.382 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:13:23.382 10:43:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.382 10:43:40 -- target/rpc.sh@110 -- # stats='{ 00:13:23.382 "tick_rate": 2700000000, 00:13:23.382 "poll_groups": [ 00:13:23.382 { 00:13:23.382 "name": "nvmf_tgt_poll_group_0", 00:13:23.382 "admin_qpairs": 2, 00:13:23.382 "io_qpairs": 84, 00:13:23.382 "current_admin_qpairs": 0, 00:13:23.382 "current_io_qpairs": 0, 00:13:23.382 "pending_bdev_io": 0, 00:13:23.382 "completed_nvme_io": 182, 00:13:23.382 "transports": [ 00:13:23.382 { 00:13:23.382 "trtype": "TCP" 00:13:23.382 } 00:13:23.382 ] 00:13:23.382 }, 00:13:23.382 { 00:13:23.382 "name": "nvmf_tgt_poll_group_1", 00:13:23.382 "admin_qpairs": 2, 00:13:23.382 "io_qpairs": 84, 00:13:23.382 "current_admin_qpairs": 0, 00:13:23.382 "current_io_qpairs": 0, 00:13:23.382 "pending_bdev_io": 0, 00:13:23.382 "completed_nvme_io": 135, 00:13:23.382 "transports": [ 00:13:23.382 { 00:13:23.382 "trtype": "TCP" 00:13:23.382 } 00:13:23.382 ] 00:13:23.382 }, 00:13:23.382 { 00:13:23.382 "name": "nvmf_tgt_poll_group_2", 00:13:23.382 "admin_qpairs": 1, 00:13:23.382 "io_qpairs": 84, 00:13:23.382 "current_admin_qpairs": 0, 00:13:23.382 "current_io_qpairs": 0, 00:13:23.382 "pending_bdev_io": 0, 00:13:23.382 "completed_nvme_io": 111, 00:13:23.382 "transports": [ 00:13:23.382 { 00:13:23.382 "trtype": "TCP" 00:13:23.382 } 00:13:23.382 ] 00:13:23.382 }, 00:13:23.382 { 00:13:23.382 "name": "nvmf_tgt_poll_group_3", 00:13:23.382 "admin_qpairs": 2, 00:13:23.382 "io_qpairs": 84, 00:13:23.382 "current_admin_qpairs": 0, 00:13:23.382 "current_io_qpairs": 0, 00:13:23.382 "pending_bdev_io": 0, 00:13:23.382 "completed_nvme_io": 258, 00:13:23.382 "transports": [ 00:13:23.382 { 00:13:23.382 "trtype": "TCP" 00:13:23.382 } 00:13:23.382 ] 00:13:23.382 } 00:13:23.382 ] 00:13:23.382 }' 00:13:23.382 10:43:40 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.382 10:43:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.382 10:43:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.382 10:43:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.640 10:43:40 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.640 10:43:40 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.640 10:43:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.640 10:43:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.640 10:43:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.640 10:43:40 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:23.640 10:43:40 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:23.640 10:43:40 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.640 10:43:40 -- target/rpc.sh@123 -- # nvmftestfini 00:13:23.640 10:43:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.640 10:43:40 -- nvmf/common.sh@116 -- # sync 00:13:23.640 10:43:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:23.640 10:43:40 -- nvmf/common.sh@119 -- # set +e 00:13:23.640 10:43:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:23.640 10:43:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:23.640 rmmod nvme_tcp 00:13:23.640 rmmod nvme_fabrics 00:13:23.640 rmmod nvme_keyring 00:13:23.640 10:43:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:23.640 10:43:40 -- nvmf/common.sh@123 -- # set -e 00:13:23.640 10:43:40 -- 
nvmf/common.sh@124 -- # return 0 00:13:23.640 10:43:40 -- nvmf/common.sh@477 -- # '[' -n 3401746 ']' 00:13:23.640 10:43:40 -- nvmf/common.sh@478 -- # killprocess 3401746 00:13:23.640 10:43:40 -- common/autotest_common.sh@926 -- # '[' -z 3401746 ']' 00:13:23.640 10:43:40 -- common/autotest_common.sh@930 -- # kill -0 3401746 00:13:23.640 10:43:40 -- common/autotest_common.sh@931 -- # uname 00:13:23.640 10:43:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:23.640 10:43:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3401746 00:13:23.640 10:43:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:23.640 10:43:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:23.640 10:43:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3401746' 00:13:23.640 killing process with pid 3401746 00:13:23.640 10:43:40 -- common/autotest_common.sh@945 -- # kill 3401746 00:13:23.640 10:43:40 -- common/autotest_common.sh@950 -- # wait 3401746 00:13:23.899 10:43:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:23.899 10:43:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:23.899 10:43:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:23.899 10:43:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.899 10:43:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:23.899 10:43:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.899 10:43:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.899 10:43:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.803 10:43:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:25.803 00:13:25.803 real 0m25.765s 00:13:25.803 user 1m24.578s 00:13:25.803 sys 0m4.141s 00:13:25.803 10:43:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.803 10:43:42 -- common/autotest_common.sh@10 -- # set +x 00:13:25.803 ************************************ 00:13:25.803 END TEST nvmf_rpc 00:13:25.803 ************************************ 00:13:26.062 10:43:42 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.062 10:43:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:26.062 10:43:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.062 10:43:42 -- common/autotest_common.sh@10 -- # set +x 00:13:26.062 ************************************ 00:13:26.062 START TEST nvmf_invalid 00:13:26.062 ************************************ 00:13:26.062 10:43:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.062 * Looking for test storage... 
00:13:26.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.062 10:43:42 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.062 10:43:42 -- nvmf/common.sh@7 -- # uname -s 00:13:26.062 10:43:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.062 10:43:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.062 10:43:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.062 10:43:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.062 10:43:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.062 10:43:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.062 10:43:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.062 10:43:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.062 10:43:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.062 10:43:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.062 10:43:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.062 10:43:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.062 10:43:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.062 10:43:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.062 10:43:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.062 10:43:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.062 10:43:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.062 10:43:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.062 10:43:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.062 10:43:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.062 10:43:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.062 10:43:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.062 10:43:42 -- paths/export.sh@5 -- # export PATH 00:13:26.062 10:43:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.062 10:43:42 -- nvmf/common.sh@46 -- # : 0 00:13:26.062 10:43:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:26.062 10:43:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:26.062 10:43:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:26.062 10:43:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.062 10:43:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.062 10:43:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:26.062 10:43:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:26.062 10:43:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:26.062 10:43:42 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:26.062 10:43:42 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.062 10:43:42 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:26.062 10:43:42 -- target/invalid.sh@14 -- # target=foobar 00:13:26.062 10:43:42 -- target/invalid.sh@16 -- # RANDOM=0 00:13:26.062 10:43:42 -- target/invalid.sh@34 -- # nvmftestinit 00:13:26.062 10:43:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:26.062 10:43:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.062 10:43:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:26.062 10:43:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:26.062 10:43:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:26.062 10:43:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.062 10:43:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.062 10:43:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.062 10:43:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:26.062 10:43:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:26.062 10:43:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:26.062 10:43:42 -- common/autotest_common.sh@10 -- # set +x 00:13:27.988 10:43:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:27.988 10:43:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:27.988 10:43:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:27.988 10:43:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:27.988 10:43:44 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:27.988 10:43:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:27.988 10:43:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:27.988 10:43:44 -- nvmf/common.sh@294 -- # net_devs=() 00:13:27.988 10:43:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:27.988 10:43:44 -- nvmf/common.sh@295 -- # e810=() 00:13:27.988 10:43:44 -- nvmf/common.sh@295 -- # local -ga e810 00:13:27.988 10:43:44 -- nvmf/common.sh@296 -- # x722=() 00:13:27.988 10:43:44 -- nvmf/common.sh@296 -- # local -ga x722 00:13:27.988 10:43:44 -- nvmf/common.sh@297 -- # mlx=() 00:13:27.988 10:43:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:27.988 10:43:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.988 10:43:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:27.988 10:43:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:27.988 10:43:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:27.988 10:43:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:27.988 10:43:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:27.988 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:27.988 10:43:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:27.988 10:43:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:27.988 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:27.988 10:43:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:27.988 10:43:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:27.988 
10:43:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.988 10:43:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:27.988 10:43:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.988 10:43:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:27.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:27.988 10:43:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.988 10:43:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:27.988 10:43:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.988 10:43:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:27.988 10:43:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.988 10:43:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:27.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:27.988 10:43:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.988 10:43:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:27.988 10:43:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:27.988 10:43:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:27.988 10:43:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:27.988 10:43:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.988 10:43:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.988 10:43:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.988 10:43:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:27.988 10:43:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.988 10:43:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.988 10:43:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:27.988 10:43:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.988 10:43:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.988 10:43:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:27.988 10:43:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:27.988 10:43:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.988 10:43:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.988 10:43:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.988 10:43:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.988 10:43:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:27.988 10:43:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.245 10:43:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.245 10:43:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.245 10:43:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:28.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:13:28.245 00:13:28.245 --- 10.0.0.2 ping statistics --- 00:13:28.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.245 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:13:28.245 10:43:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:13:28.245 00:13:28.245 --- 10.0.0.1 ping statistics --- 00:13:28.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.245 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:13:28.245 10:43:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.245 10:43:44 -- nvmf/common.sh@410 -- # return 0 00:13:28.245 10:43:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.245 10:43:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.245 10:43:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.245 10:43:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.245 10:43:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.245 10:43:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.245 10:43:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.245 10:43:44 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:28.245 10:43:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.245 10:43:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.245 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:28.245 10:43:44 -- nvmf/common.sh@469 -- # nvmfpid=3406458 00:13:28.245 10:43:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.245 10:43:44 -- nvmf/common.sh@470 -- # waitforlisten 3406458 00:13:28.245 10:43:44 -- common/autotest_common.sh@819 -- # '[' -z 3406458 ']' 00:13:28.245 10:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.245 10:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:28.245 10:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.245 10:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:28.245 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:13:28.245 [2024-07-10 10:43:44.944276] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:28.245 [2024-07-10 10:43:44.944352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.245 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.245 [2024-07-10 10:43:45.011509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.501 [2024-07-10 10:43:45.102416] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.501 [2024-07-10 10:43:45.102553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.501 [2024-07-10 10:43:45.102570] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.501 [2024-07-10 10:43:45.102583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
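The nvmftestinit phy-mode plumbing above can be reproduced by hand: the first NIC port (cvl_0_0) is moved into a private network namespace and becomes the target interface, the second port (cvl_0_1) stays in the root namespace as the initiator, and nvmf_tgt is launched inside the namespace. A sketch using the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace; RPCs then go through /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &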
00:13:28.501 [2024-07-10 10:43:45.102631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.501 [2024-07-10 10:43:45.102689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.501 [2024-07-10 10:43:45.102755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.501 [2024-07-10 10:43:45.102757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.431 10:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:29.431 10:43:45 -- common/autotest_common.sh@852 -- # return 0 00:13:29.431 10:43:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:29.431 10:43:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:29.431 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:29.431 10:43:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.431 10:43:45 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:29.431 10:43:45 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15473 00:13:29.431 [2024-07-10 10:43:46.151692] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:29.431 10:43:46 -- target/invalid.sh@40 -- # out='request: 00:13:29.431 { 00:13:29.431 "nqn": "nqn.2016-06.io.spdk:cnode15473", 00:13:29.431 "tgt_name": "foobar", 00:13:29.431 "method": "nvmf_create_subsystem", 00:13:29.431 "req_id": 1 00:13:29.431 } 00:13:29.431 Got JSON-RPC error response 00:13:29.431 response: 00:13:29.431 { 00:13:29.431 "code": -32603, 00:13:29.431 "message": "Unable to find target foobar" 00:13:29.431 }' 00:13:29.431 10:43:46 -- target/invalid.sh@41 -- # [[ request: 00:13:29.431 { 00:13:29.431 "nqn": "nqn.2016-06.io.spdk:cnode15473", 00:13:29.431 "tgt_name": "foobar", 00:13:29.431 "method": "nvmf_create_subsystem", 00:13:29.431 "req_id": 1 00:13:29.431 } 00:13:29.431 Got JSON-RPC error response 00:13:29.431 response: 00:13:29.431 { 00:13:29.431 "code": -32603, 00:13:29.431 "message": "Unable to find target foobar" 00:13:29.431 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:29.431 10:43:46 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:29.431 10:43:46 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3145 00:13:29.688 [2024-07-10 10:43:46.392499] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3145: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:29.688 10:43:46 -- target/invalid.sh@45 -- # out='request: 00:13:29.688 { 00:13:29.688 "nqn": "nqn.2016-06.io.spdk:cnode3145", 00:13:29.688 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.688 "method": "nvmf_create_subsystem", 00:13:29.688 "req_id": 1 00:13:29.688 } 00:13:29.688 Got JSON-RPC error response 00:13:29.688 response: 00:13:29.688 { 00:13:29.688 "code": -32602, 00:13:29.688 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.688 }' 00:13:29.688 10:43:46 -- target/invalid.sh@46 -- # [[ request: 00:13:29.688 { 00:13:29.688 "nqn": "nqn.2016-06.io.spdk:cnode3145", 00:13:29.688 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.688 "method": "nvmf_create_subsystem", 00:13:29.688 "req_id": 1 00:13:29.688 } 00:13:29.688 Got JSON-RPC error response 00:13:29.688 response: 00:13:29.688 { 
00:13:29.688 "code": -32602, 00:13:29.688 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.688 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.688 10:43:46 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:29.688 10:43:46 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19949 00:13:29.944 [2024-07-10 10:43:46.633227] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19949: invalid model number 'SPDK_Controller' 00:13:29.944 10:43:46 -- target/invalid.sh@50 -- # out='request: 00:13:29.944 { 00:13:29.944 "nqn": "nqn.2016-06.io.spdk:cnode19949", 00:13:29.944 "model_number": "SPDK_Controller\u001f", 00:13:29.944 "method": "nvmf_create_subsystem", 00:13:29.944 "req_id": 1 00:13:29.944 } 00:13:29.944 Got JSON-RPC error response 00:13:29.944 response: 00:13:29.944 { 00:13:29.944 "code": -32602, 00:13:29.944 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.944 }' 00:13:29.944 10:43:46 -- target/invalid.sh@51 -- # [[ request: 00:13:29.944 { 00:13:29.944 "nqn": "nqn.2016-06.io.spdk:cnode19949", 00:13:29.944 "model_number": "SPDK_Controller\u001f", 00:13:29.944 "method": "nvmf_create_subsystem", 00:13:29.944 "req_id": 1 00:13:29.944 } 00:13:29.944 Got JSON-RPC error response 00:13:29.944 response: 00:13:29.944 { 00:13:29.944 "code": -32602, 00:13:29.944 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.944 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.944 10:43:46 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:29.944 10:43:46 -- target/invalid.sh@19 -- # local length=21 ll 00:13:29.944 10:43:46 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.944 10:43:46 -- target/invalid.sh@21 -- # local chars 00:13:29.944 10:43:46 -- target/invalid.sh@22 -- # local string 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 52 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=4 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 113 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=q 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 46 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=. 
00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 107 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=k 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 39 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=\' 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 73 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=I 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 33 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+='!' 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 35 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+='#' 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 116 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=t 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 86 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=V 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 118 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=v 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 126 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+='~' 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.944 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # printf %x 47 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:29.944 10:43:46 -- target/invalid.sh@25 -- # string+=/ 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 42 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+='*' 
00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 84 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+=T 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 125 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+='}' 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 88 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+=X 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 75 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+=K 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 70 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+=F 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 82 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+=R 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # printf %x 118 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.945 10:43:46 -- target/invalid.sh@25 -- # string+=v 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.945 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.945 10:43:46 -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:13:29.945 10:43:46 -- target/invalid.sh@31 -- # echo '4q.k'\''I!#tVv~/*T}XKFRv' 00:13:29.945 10:43:46 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '4q.k'\''I!#tVv~/*T}XKFRv' nqn.2016-06.io.spdk:cnode12197 00:13:30.202 [2024-07-10 10:43:46.966344] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12197: invalid serial number '4q.k'I!#tVv~/*T}XKFRv' 00:13:30.202 10:43:46 -- target/invalid.sh@54 -- # out='request: 00:13:30.202 { 00:13:30.202 "nqn": "nqn.2016-06.io.spdk:cnode12197", 00:13:30.202 "serial_number": "4q.k'\''I!#tVv~/*T}XKFRv", 00:13:30.202 "method": "nvmf_create_subsystem", 00:13:30.202 "req_id": 1 00:13:30.202 } 00:13:30.202 Got JSON-RPC error response 00:13:30.202 response: 00:13:30.202 { 00:13:30.202 "code": -32602, 00:13:30.202 "message": "Invalid SN 4q.k'\''I!#tVv~/*T}XKFRv" 00:13:30.202 }' 00:13:30.202 10:43:46 -- target/invalid.sh@55 -- # [[ request: 00:13:30.202 { 00:13:30.202 "nqn": "nqn.2016-06.io.spdk:cnode12197", 00:13:30.202 "serial_number": 
"4q.k'I!#tVv~/*T}XKFRv", 00:13:30.202 "method": "nvmf_create_subsystem", 00:13:30.202 "req_id": 1 00:13:30.202 } 00:13:30.202 Got JSON-RPC error response 00:13:30.202 response: 00:13:30.202 { 00:13:30.202 "code": -32602, 00:13:30.202 "message": "Invalid SN 4q.k'I!#tVv~/*T}XKFRv" 00:13:30.202 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.202 10:43:46 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:30.202 10:43:46 -- target/invalid.sh@19 -- # local length=41 ll 00:13:30.202 10:43:46 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.202 10:43:46 -- target/invalid.sh@21 -- # local chars 00:13:30.202 10:43:46 -- target/invalid.sh@22 -- # local string 00:13:30.202 10:43:46 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.202 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # printf %x 76 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # string+=L 00:13:30.202 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # printf %x 83 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # string+=S 00:13:30.202 10:43:46 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:46 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # printf %x 114 00:13:30.202 10:43:46 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=r 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 56 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=8 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 84 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=T 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 35 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+='#' 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 89 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=Y 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 104 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=h 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 78 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=N 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # printf %x 45 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.202 10:43:47 -- target/invalid.sh@25 -- # string+=- 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.202 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 110 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=n 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 71 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=G 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 123 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+='{' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 89 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=Y 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 92 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+='\' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 39 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=\' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 95 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=_ 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 74 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=J 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 41 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=')' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 67 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=C 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 79 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=O 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 60 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+='<' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 118 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=v 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 107 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=k 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 123 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+='{' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 84 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=T 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 35 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+='#' 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 98 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=b 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 65 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=A 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # printf %x 108 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:30.461 10:43:47 -- target/invalid.sh@25 -- # string+=l 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 65 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=A 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 85 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=U 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 105 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=i 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 112 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=p 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 82 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=R 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 64 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=@ 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 91 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+='[' 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 124 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+='|' 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 70 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=F 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 38 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+='&' 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # printf %x 109 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:30.462 10:43:47 -- target/invalid.sh@25 -- # string+=m 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.462 10:43:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.462 10:43:47 -- target/invalid.sh@28 -- # [[ L == \- ]] 00:13:30.462 10:43:47 -- target/invalid.sh@31 -- # echo 'LSr8T#YhN-nG{Y\'\''_J)CO /dev/null' 00:13:33.032 10:43:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.928 10:43:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:34.928 00:13:34.928 real 0m9.097s 00:13:34.928 user 0m22.021s 00:13:34.928 sys 0m2.465s 00:13:35.184 10:43:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.184 10:43:51 -- common/autotest_common.sh@10 -- # set +x 00:13:35.184 ************************************ 00:13:35.184 END TEST nvmf_invalid 00:13:35.184 ************************************ 00:13:35.184 10:43:51 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:35.184 10:43:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:35.184 10:43:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:35.184 10:43:51 -- common/autotest_common.sh@10 -- # set +x 00:13:35.184 ************************************ 00:13:35.184 START TEST nvmf_abort 00:13:35.184 ************************************ 00:13:35.184 10:43:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:35.184 * Looking for test storage... 00:13:35.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.184 10:43:51 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.184 10:43:51 -- nvmf/common.sh@7 -- # uname -s 00:13:35.184 10:43:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.184 10:43:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.184 10:43:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.184 10:43:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.184 10:43:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.184 10:43:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.184 10:43:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.184 10:43:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.184 10:43:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.184 10:43:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.184 10:43:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.184 10:43:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.184 10:43:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.184 10:43:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.184 10:43:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.184 10:43:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.184 10:43:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.184 10:43:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.184 10:43:51 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.184 10:43:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.184 10:43:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.184 10:43:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.184 10:43:51 -- paths/export.sh@5 -- # export PATH 00:13:35.184 10:43:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.184 10:43:51 -- nvmf/common.sh@46 -- # : 0 00:13:35.185 10:43:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:35.185 10:43:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:35.185 10:43:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:35.185 10:43:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.185 10:43:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.185 10:43:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:35.185 10:43:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:35.185 10:43:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:35.185 10:43:51 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.185 10:43:51 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:35.185 10:43:51 -- target/abort.sh@14 -- # nvmftestinit 00:13:35.185 10:43:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:35.185 10:43:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.185 10:43:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:35.185 10:43:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:35.185 10:43:51 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:13:35.185 10:43:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.185 10:43:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.185 10:43:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.185 10:43:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:35.185 10:43:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:35.185 10:43:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:35.185 10:43:51 -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 10:43:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:37.708 10:43:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:37.708 10:43:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:37.708 10:43:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:37.708 10:43:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:37.708 10:43:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:37.708 10:43:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:37.708 10:43:53 -- nvmf/common.sh@294 -- # net_devs=() 00:13:37.708 10:43:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:37.708 10:43:53 -- nvmf/common.sh@295 -- # e810=() 00:13:37.708 10:43:53 -- nvmf/common.sh@295 -- # local -ga e810 00:13:37.708 10:43:53 -- nvmf/common.sh@296 -- # x722=() 00:13:37.708 10:43:53 -- nvmf/common.sh@296 -- # local -ga x722 00:13:37.708 10:43:53 -- nvmf/common.sh@297 -- # mlx=() 00:13:37.708 10:43:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:37.708 10:43:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.708 10:43:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:37.708 10:43:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:37.708 10:43:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:37.708 10:43:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:37.708 10:43:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:37.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:37.708 10:43:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:37.708 
10:43:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:37.708 10:43:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:37.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:37.708 10:43:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:37.708 10:43:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:37.708 10:43:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.708 10:43:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:37.708 10:43:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.708 10:43:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:37.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:37.708 10:43:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.708 10:43:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:37.708 10:43:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.708 10:43:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:37.708 10:43:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.708 10:43:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:37.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:37.708 10:43:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.708 10:43:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:37.708 10:43:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:37.708 10:43:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:37.708 10:43:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:37.708 10:43:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.708 10:43:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.708 10:43:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.708 10:43:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:37.708 10:43:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.708 10:43:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.708 10:43:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:37.708 10:43:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.708 10:43:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.708 10:43:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:37.708 10:43:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:37.708 10:43:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.708 10:43:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.708 10:43:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.708 10:43:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.708 10:43:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:37.708 
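abort.sh sources nvmf/common.sh, which first detects the two E810 ports (vendor 0x8086, device 0x159b, exposed as cvl_0_0 and cvl_0_1) and then performs the TCP bring-up unfolding here: cvl_0_0 is moved into a dedicated network namespace to act as the target side, the two interfaces get 10.0.0.2 (target) and 10.0.0.1 (initiator), an iptables rule opens NVMe/TCP port 4420, and a ping in each direction confirms connectivity before the target app starts. A condensed sketch of that sequence, mirroring the traced nvmf_tcp_init commands with shortened variable names rather than the common.sh source itself:

    TARGET_IF=cvl_0_0        INITIATOR_IF=cvl_0_1
    TARGET_IP=10.0.0.2       INITIATOR_IP=10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                      # target port lives in the namespace
    ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 "$TARGET_IP"                                    # initiator -> target
    ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"             # target -> initiator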
10:43:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.708 10:43:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.708 10:43:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.708 10:43:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:37.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:13:37.708 00:13:37.708 --- 10.0.0.2 ping statistics --- 00:13:37.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.708 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:13:37.708 10:43:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:13:37.708 00:13:37.708 --- 10.0.0.1 ping statistics --- 00:13:37.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.708 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:37.708 10:43:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.708 10:43:54 -- nvmf/common.sh@410 -- # return 0 00:13:37.708 10:43:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:37.708 10:43:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.708 10:43:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:37.708 10:43:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:37.708 10:43:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.708 10:43:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:37.708 10:43:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:37.708 10:43:54 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:37.708 10:43:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:37.708 10:43:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:37.708 10:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 10:43:54 -- nvmf/common.sh@469 -- # nvmfpid=3409124 00:13:37.708 10:43:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:37.708 10:43:54 -- nvmf/common.sh@470 -- # waitforlisten 3409124 00:13:37.708 10:43:54 -- common/autotest_common.sh@819 -- # '[' -z 3409124 ']' 00:13:37.708 10:43:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.708 10:43:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:37.709 10:43:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.709 10:43:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:37.709 10:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:37.709 [2024-07-10 10:43:54.143184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:13:37.709 [2024-07-10 10:43:54.143272] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.709 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.709 [2024-07-10 10:43:54.211228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.709 [2024-07-10 10:43:54.305433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:37.709 [2024-07-10 10:43:54.305610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.709 [2024-07-10 10:43:54.305629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.709 [2024-07-10 10:43:54.305644] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.709 [2024-07-10 10:43:54.305712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.709 [2024-07-10 10:43:54.305781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.709 [2024-07-10 10:43:54.305785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.642 10:43:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:38.642 10:43:55 -- common/autotest_common.sh@852 -- # return 0 00:13:38.642 10:43:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:38.642 10:43:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:38.642 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.642 10:43:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.642 10:43:55 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:38.642 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.642 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.642 [2024-07-10 10:43:55.177676] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.642 10:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.642 10:43:55 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:38.642 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.642 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.642 Malloc0 00:13:38.642 10:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.642 10:43:55 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:38.642 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.642 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.642 Delay0 00:13:38.642 10:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.642 10:43:55 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:38.642 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.642 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.642 10:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.642 10:43:55 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:38.642 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.642 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.643 10:43:55 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:13:38.643 10:43:55 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:38.643 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.643 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.643 [2024-07-10 10:43:55.253469] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.643 10:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.643 10:43:55 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.643 10:43:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.643 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:38.643 10:43:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.643 10:43:55 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:38.643 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.643 [2024-07-10 10:43:55.400590] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:41.238 Initializing NVMe Controllers 00:13:41.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:41.238 controller IO queue size 128 less than required 00:13:41.238 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:41.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:41.238 Initialization complete. Launching workers. 00:13:41.238 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33812 00:13:41.238 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33873, failed to submit 62 00:13:41.238 success 33812, unsuccess 61, failed 0 00:13:41.238 10:43:57 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:41.238 10:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.238 10:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:41.238 10:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.238 10:43:57 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:41.238 10:43:57 -- target/abort.sh@38 -- # nvmftestfini 00:13:41.238 10:43:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:41.238 10:43:57 -- nvmf/common.sh@116 -- # sync 00:13:41.239 10:43:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:41.239 10:43:57 -- nvmf/common.sh@119 -- # set +e 00:13:41.239 10:43:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:41.239 10:43:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:41.239 rmmod nvme_tcp 00:13:41.239 rmmod nvme_fabrics 00:13:41.239 rmmod nvme_keyring 00:13:41.239 10:43:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:41.239 10:43:57 -- nvmf/common.sh@123 -- # set -e 00:13:41.239 10:43:57 -- nvmf/common.sh@124 -- # return 0 00:13:41.239 10:43:57 -- nvmf/common.sh@477 -- # '[' -n 3409124 ']' 00:13:41.239 10:43:57 -- nvmf/common.sh@478 -- # killprocess 3409124 00:13:41.239 10:43:57 -- common/autotest_common.sh@926 -- # '[' -z 3409124 ']' 00:13:41.239 10:43:57 -- common/autotest_common.sh@930 -- # kill -0 3409124 00:13:41.239 10:43:57 -- common/autotest_common.sh@931 -- # uname 00:13:41.239 10:43:57 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.239 10:43:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3409124 00:13:41.239 10:43:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:41.239 10:43:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:41.239 10:43:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3409124' 00:13:41.239 killing process with pid 3409124 00:13:41.239 10:43:57 -- common/autotest_common.sh@945 -- # kill 3409124 00:13:41.239 10:43:57 -- common/autotest_common.sh@950 -- # wait 3409124 00:13:41.239 10:43:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:41.239 10:43:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:41.239 10:43:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:41.239 10:43:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.239 10:43:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:41.239 10:43:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.239 10:43:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.239 10:43:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.141 10:43:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:43.398 00:13:43.398 real 0m8.188s 00:13:43.398 user 0m13.394s 00:13:43.398 sys 0m2.658s 00:13:43.398 10:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.398 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:13:43.398 ************************************ 00:13:43.398 END TEST nvmf_abort 00:13:43.398 ************************************ 00:13:43.398 10:43:59 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.398 10:43:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.398 10:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.398 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:13:43.398 ************************************ 00:13:43.398 START TEST nvmf_ns_hotplug_stress 00:13:43.398 ************************************ 00:13:43.398 10:43:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.398 * Looking for test storage... 
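The abort test that finishes above follows the usual SPDK target pattern: nvmf_tgt is started inside the namespace, then driven over rpc.py to create a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (so requests stay in flight long enough to be aborted), and a subsystem with that namespace and a 10.0.0.2:4420 listener, after which the bundled abort example runs against it; the "success 33812, unsuccess 61, failed 0" summary is the pass signal, and the subsystem is then deleted and the nvme-tcp modules unloaded. A hedged outline of the same RPC sequence, reconstructed from the trace rather than copied from abort.sh, assuming the target is already listening on the default /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode0

    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
    "$rpc" bdev_malloc_create 64 4096 -b Malloc0
    "$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Queue I/O against the delayed namespace and abort it from the initiator side.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

    "$rpc" nvmf_delete_subsystem "$nqn"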
00:13:43.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.398 10:44:00 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.398 10:44:00 -- nvmf/common.sh@7 -- # uname -s 00:13:43.398 10:44:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.398 10:44:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.398 10:44:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.398 10:44:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.398 10:44:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.398 10:44:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.398 10:44:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.398 10:44:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.398 10:44:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.398 10:44:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.398 10:44:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.398 10:44:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.398 10:44:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.398 10:44:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.398 10:44:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.398 10:44:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.398 10:44:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.398 10:44:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.398 10:44:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.398 10:44:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.398 10:44:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.398 10:44:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.398 10:44:00 -- paths/export.sh@5 -- # export PATH 00:13:43.398 10:44:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.398 10:44:00 -- nvmf/common.sh@46 -- # : 0 00:13:43.398 10:44:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.398 10:44:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.398 10:44:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.398 10:44:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.398 10:44:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.398 10:44:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.398 10:44:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.398 10:44:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.398 10:44:00 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.398 10:44:00 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:43.398 10:44:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.398 10:44:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.398 10:44:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.398 10:44:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.398 10:44:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.398 10:44:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.398 10:44:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.398 10:44:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.398 10:44:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:43.398 10:44:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:43.398 10:44:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:43.398 10:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:45.296 10:44:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:45.296 10:44:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:45.296 10:44:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:45.296 10:44:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:45.296 10:44:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:45.296 10:44:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:45.296 10:44:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:45.296 10:44:02 -- nvmf/common.sh@294 -- # net_devs=() 00:13:45.296 10:44:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:45.296 10:44:02 -- nvmf/common.sh@295 
-- # e810=() 00:13:45.296 10:44:02 -- nvmf/common.sh@295 -- # local -ga e810 00:13:45.296 10:44:02 -- nvmf/common.sh@296 -- # x722=() 00:13:45.296 10:44:02 -- nvmf/common.sh@296 -- # local -ga x722 00:13:45.296 10:44:02 -- nvmf/common.sh@297 -- # mlx=() 00:13:45.296 10:44:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:45.296 10:44:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.296 10:44:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:45.296 10:44:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:45.296 10:44:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:45.296 10:44:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:45.296 10:44:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:45.296 10:44:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:45.296 10:44:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:45.296 10:44:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:45.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:45.296 10:44:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:45.296 10:44:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:45.296 10:44:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.297 10:44:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.297 10:44:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:45.297 10:44:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:45.297 10:44:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:45.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:45.297 10:44:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:45.297 10:44:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:45.297 10:44:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:45.554 10:44:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:45.554 10:44:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.554 10:44:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:45.554 10:44:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.554 10:44:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:45.554 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:45.554 10:44:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.554 10:44:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:45.554 10:44:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.554 10:44:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:45.554 10:44:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.554 10:44:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:45.554 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:45.554 10:44:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.554 10:44:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:45.554 10:44:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:45.554 10:44:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:45.554 10:44:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:45.554 10:44:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.554 10:44:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.554 10:44:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.554 10:44:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:45.554 10:44:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.554 10:44:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.554 10:44:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:45.554 10:44:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.555 10:44:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.555 10:44:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:45.555 10:44:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:45.555 10:44:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.555 10:44:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.555 10:44:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.555 10:44:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.555 10:44:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:45.555 10:44:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.555 10:44:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.555 10:44:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.555 10:44:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:45.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:13:45.555 00:13:45.555 --- 10.0.0.2 ping statistics --- 00:13:45.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.555 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:13:45.555 10:44:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:45.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:45.555 00:13:45.555 --- 10.0.0.1 ping statistics --- 00:13:45.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.555 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:45.555 10:44:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.555 10:44:02 -- nvmf/common.sh@410 -- # return 0 00:13:45.555 10:44:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:45.555 10:44:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.555 10:44:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:45.555 10:44:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:45.555 10:44:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.555 10:44:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:45.555 10:44:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:45.555 10:44:02 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:45.555 10:44:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:45.555 10:44:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:45.555 10:44:02 -- common/autotest_common.sh@10 -- # set +x 00:13:45.555 10:44:02 -- nvmf/common.sh@469 -- # nvmfpid=3411500 00:13:45.555 10:44:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.555 10:44:02 -- nvmf/common.sh@470 -- # waitforlisten 3411500 00:13:45.555 10:44:02 -- common/autotest_common.sh@819 -- # '[' -z 3411500 ']' 00:13:45.555 10:44:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.555 10:44:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.555 10:44:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.555 10:44:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.555 10:44:02 -- common/autotest_common.sh@10 -- # set +x 00:13:45.555 [2024-07-10 10:44:02.314571] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:45.555 [2024-07-10 10:44:02.314646] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.555 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.811 [2024-07-10 10:44:02.378250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.811 [2024-07-10 10:44:02.461578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:45.811 [2024-07-10 10:44:02.461753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.812 [2024-07-10 10:44:02.461770] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.812 [2024-07-10 10:44:02.461782] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
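For readability, the nvmf_tcp_init sequence traced above boils down to roughly the following steps. The interface names (cvl_0_0 / cvl_0_1) and the namespace name are the ones detected in this particular run; this is a condensed restatement of the traced commands, not the script itself.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                          # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator check
    modprobe nvme-tcp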
00:13:45.812 [2024-07-10 10:44:02.461873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.812 [2024-07-10 10:44:02.461897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.812 [2024-07-10 10:44:02.461900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.739 10:44:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.739 10:44:03 -- common/autotest_common.sh@852 -- # return 0 00:13:46.739 10:44:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:46.739 10:44:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:46.739 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:13:46.739 10:44:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.739 10:44:03 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:46.739 10:44:03 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.739 [2024-07-10 10:44:03.513462] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.739 10:44:03 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.995 10:44:03 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.252 [2024-07-10 10:44:03.984015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.252 10:44:04 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.510 10:44:04 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:47.790 Malloc0 00:13:47.790 10:44:04 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:48.070 Delay0 00:13:48.070 10:44:04 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.328 10:44:04 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:48.585 NULL1 00:13:48.585 10:44:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:48.842 10:44:05 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3411941 00:13:48.842 10:44:05 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:48.842 10:44:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:48.842 10:44:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.842 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.770 Read completed with error (sct=0, sc=11) 00:13:49.770 
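Condensed, the target-side setup that ns_hotplug_stress.sh drives above is the RPC sequence below. Here rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path and nvmf_tgt / spdk_nvme_perf for the binaries under build/bin; all arguments are copied from the trace, but this is a sketch of the flow rather than the script verbatim.

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # target runs inside the namespace
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &         # runs in the background
    PERF_PID=$!                                           # liveness check for the hotplug loop below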
10:44:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.027 10:44:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:50.027 10:44:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:50.283 true 00:13:50.283 10:44:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:50.283 10:44:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.215 10:44:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.473 10:44:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:51.473 10:44:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:51.473 true 00:13:51.729 10:44:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:51.729 10:44:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.729 10:44:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.985 10:44:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:51.985 10:44:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:52.241 true 00:13:52.241 10:44:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:52.241 10:44:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.172 10:44:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.428 10:44:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:53.428 10:44:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:53.684 true 00:13:53.684 10:44:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:53.684 10:44:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.941 10:44:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.198 10:44:10 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:54.198 10:44:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:54.454 true 00:13:54.454 10:44:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:54.454 10:44:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.382 10:44:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.639 10:44:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:55.639 10:44:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:55.639 true 00:13:55.639 10:44:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:55.639 10:44:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.895 10:44:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.152 10:44:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:56.152 10:44:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:56.408 true 00:13:56.408 10:44:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:56.408 10:44:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.338 10:44:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.595 10:44:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:57.595 10:44:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:57.852 true 00:13:57.852 10:44:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:57.852 10:44:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.108 10:44:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.365 10:44:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:58.365 10:44:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:58.631 true 00:13:58.631 10:44:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:13:58.631 10:44:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:59.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.561 10:44:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.561 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.818 10:44:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:59.818 10:44:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:00.075 true 00:14:00.075 10:44:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:00.075 10:44:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.333 10:44:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.591 10:44:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:00.591 10:44:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:00.848 true 00:14:00.848 10:44:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:00.848 10:44:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.782 10:44:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.039 10:44:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:02.039 10:44:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:02.296 true 00:14:02.296 10:44:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:02.296 10:44:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.556 10:44:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.814 10:44:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:02.814 10:44:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:03.072 true 00:14:03.072 10:44:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:03.072 10:44:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.005 10:44:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.005 10:44:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:04.005 10:44:20 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:04.263 true 00:14:04.520 10:44:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:04.520 10:44:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.778 10:44:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.778 10:44:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:04.778 10:44:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:05.036 true 00:14:05.036 10:44:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:05.036 10:44:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.968 10:44:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.225 10:44:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:06.225 10:44:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:06.491 true 00:14:06.491 10:44:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:06.491 10:44:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.748 10:44:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.005 10:44:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:07.005 10:44:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:07.262 true 00:14:07.262 10:44:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:07.262 10:44:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.195 10:44:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.195 10:44:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:08.195 10:44:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:08.453 true 00:14:08.453 10:44:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:08.453 10:44:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.710 10:44:25 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.968 10:44:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:08.968 10:44:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:09.225 true 00:14:09.225 10:44:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:09.225 10:44:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.598 10:44:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:10.598 10:44:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:10.598 10:44:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:10.856 true 00:14:10.856 10:44:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:10.856 10:44:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.789 10:44:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.789 10:44:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:11.789 10:44:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:12.047 true 00:14:12.047 10:44:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:12.047 10:44:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.304 10:44:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.561 10:44:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:12.561 10:44:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:12.819 true 00:14:12.819 10:44:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:12.819 10:44:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.752 10:44:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.009 10:44:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:14.009 10:44:30 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:14.267 true 00:14:14.267 10:44:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:14.267 10:44:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.525 10:44:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.525 10:44:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:14.525 10:44:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:14.783 true 00:14:14.783 10:44:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:14.783 10:44:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.715 10:44:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.973 10:44:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:15.973 10:44:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:16.230 true 00:14:16.230 10:44:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:16.230 10:44:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.488 10:44:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.745 10:44:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:16.745 10:44:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:17.002 true 00:14:17.002 10:44:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:17.002 10:44:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.934 10:44:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:18.191 10:44:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:18.192 10:44:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:18.449 true 00:14:18.449 10:44:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:18.449 10:44:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.707 10:44:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.964 10:44:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:18.964 10:44:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:18.964 Initializing NVMe Controllers 00:14:18.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.964 Controller IO queue size 128, less than required. 00:14:18.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.964 Controller IO queue size 128, less than required. 00:14:18.964 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:18.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:18.964 Initialization complete. Launching workers. 00:14:18.964 ======================================================== 00:14:18.964 Latency(us) 00:14:18.965 Device Information : IOPS MiB/s Average min max 00:14:18.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1080.51 0.53 67018.18 2763.29 1040218.30 00:14:18.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12970.95 6.33 9868.00 1869.01 357375.98 00:14:18.965 ======================================================== 00:14:18.965 Total : 14051.46 6.86 14262.66 1869.01 1040218.30 00:14:18.965 00:14:18.965 true 00:14:18.965 10:44:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3411941 00:14:18.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3411941) - No such process 00:14:18.965 10:44:35 -- target/ns_hotplug_stress.sh@53 -- # wait 3411941 00:14:18.965 10:44:35 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.222 10:44:36 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.480 10:44:36 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:19.480 10:44:36 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:19.480 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:19.480 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.480 10:44:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:19.738 null0 00:14:19.738 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.738 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.738 10:44:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:20.005 null1 00:14:20.005 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.005 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.005 10:44:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:20.324 null2 00:14:20.324 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.324 10:44:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.324 
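The long run of @44-@50 lines above, and the @58-@66 lines that follow, are easier to read as the loops they come from. Reconstructed from the traced line numbers (a sketch inferred from this trace, not a verbatim copy of ns_hotplug_stress.sh):

    # Phase 1 (lines 44-50): hotplug Delay0 and grow NULL1 while spdk_nvme_perf is alive
    while kill -0 "$PERF_PID"; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        rpc.py bdev_null_resize NULL1 $((++null_size))    # 1000 -> 1001 -> 1002 -> ...
    done                                                  # ends once perf exits ("No such process" above)

    # Phase 2 (lines 54-66): eight concurrent add/remove workers, one namespace id each
    add_remove() {                                        # lines 14-18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    nthreads=8; pids=()
    for ((i = 0; i < nthreads; i++)); do rpc.py bdev_null_create "null$i" 100 4096; done
    for ((i = 0; i < nthreads; i++)); do add_remove $((i + 1)) "null$i" & pids+=($!); done
    wait "${pids[@]}"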
10:44:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:20.613 null3 00:14:20.613 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.613 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.613 10:44:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:20.895 null4 00:14:20.895 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.895 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.895 10:44:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:20.895 null5 00:14:20.895 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.895 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.895 10:44:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:21.153 null6 00:14:21.153 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:21.153 10:44:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:21.153 10:44:37 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:21.411 null7 00:14:21.411 10:44:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:21.411 10:44:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:21.411 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@66 -- # wait 3415970 3415971 3415973 3415975 3415977 3415979 3415981 3415983 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.412 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.670 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.929 10:44:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.188 10:44:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.446 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.704 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.963 10:44:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.222 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.222 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.222 10:44:39 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.222 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.222 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.222 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.222 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.222 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.481 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.739 10:44:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.997 10:44:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.255 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.513 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.772 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.031 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.289 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.290 10:44:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.548 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.807 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:14:26.065 10:44:42 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.323 10:44:42 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.581 10:44:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:26.839 10:44:43 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:26.839 10:44:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:26.839 10:44:43 -- nvmf/common.sh@116 -- # sync 00:14:26.839 10:44:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:26.839 10:44:43 -- nvmf/common.sh@119 -- # set +e 00:14:26.839 10:44:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:26.839 10:44:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:26.839 rmmod nvme_tcp 00:14:26.839 rmmod nvme_fabrics 00:14:26.839 rmmod nvme_keyring 00:14:26.839 10:44:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:26.839 10:44:43 -- nvmf/common.sh@123 -- # set -e 00:14:26.839 10:44:43 -- nvmf/common.sh@124 -- # return 0 00:14:26.839 10:44:43 -- nvmf/common.sh@477 -- # '[' -n 3411500 ']' 00:14:26.839 10:44:43 -- nvmf/common.sh@478 -- # killprocess 3411500 00:14:26.839 10:44:43 -- common/autotest_common.sh@926 -- # '[' -z 3411500 ']' 00:14:26.839 10:44:43 -- common/autotest_common.sh@930 -- # kill -0 3411500 00:14:26.839 10:44:43 -- common/autotest_common.sh@931 -- # uname 00:14:26.839 10:44:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.839 10:44:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3411500 00:14:26.839 10:44:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:26.839 10:44:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:26.839 10:44:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3411500' 00:14:26.839 killing 
process with pid 3411500 00:14:26.839 10:44:43 -- common/autotest_common.sh@945 -- # kill 3411500 00:14:26.839 10:44:43 -- common/autotest_common.sh@950 -- # wait 3411500 00:14:27.096 10:44:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:27.096 10:44:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:27.096 10:44:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:27.096 10:44:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.096 10:44:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:27.096 10:44:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.096 10:44:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.096 10:44:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.626 10:44:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:29.626 00:14:29.626 real 0m45.827s 00:14:29.626 user 3m26.950s 00:14:29.626 sys 0m15.644s 00:14:29.626 10:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.626 10:44:45 -- common/autotest_common.sh@10 -- # set +x 00:14:29.626 ************************************ 00:14:29.626 END TEST nvmf_ns_hotplug_stress 00:14:29.626 ************************************ 00:14:29.626 10:44:45 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:29.626 10:44:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:29.626 10:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:29.626 10:44:45 -- common/autotest_common.sh@10 -- # set +x 00:14:29.626 ************************************ 00:14:29.626 START TEST nvmf_connect_stress 00:14:29.626 ************************************ 00:14:29.626 10:44:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:29.626 * Looking for test storage... 
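The teardown that closed out the hot-plug test above (nvmftestfini, and the same sequence reappears at the end of every target test in this log) boils down to unloading the initiator modules, killing the target process, and flushing the test addresses. A hedged condensation of the commands visible in the trace ($nvmfpid is the target PID captured at startup; the netns deletion is an assumption about what the _remove_spdk_ns helper does):

    sync
    modprobe -v -r nvme-tcp                       # produces the "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"            # "killing process with pid 3411500"
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: cleanup performed by _remove_spdk_ns
    ip -4 addr flush cvl_0_1                      # last step shown before the timing summary
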
00:14:29.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.626 10:44:45 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.626 10:44:45 -- nvmf/common.sh@7 -- # uname -s 00:14:29.626 10:44:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.626 10:44:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.626 10:44:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.626 10:44:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.626 10:44:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.626 10:44:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.626 10:44:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.626 10:44:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.626 10:44:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.626 10:44:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.626 10:44:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.626 10:44:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.626 10:44:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.626 10:44:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.626 10:44:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.626 10:44:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.626 10:44:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.626 10:44:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.626 10:44:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.626 10:44:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.626 10:44:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.626 10:44:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.626 10:44:45 -- paths/export.sh@5 -- # export PATH 00:14:29.626 10:44:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.626 10:44:45 -- nvmf/common.sh@46 -- # : 0 00:14:29.627 10:44:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:29.627 10:44:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:29.627 10:44:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:29.627 10:44:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.627 10:44:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.627 10:44:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:29.627 10:44:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:29.627 10:44:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:29.627 10:44:45 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:29.627 10:44:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:29.627 10:44:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.627 10:44:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:29.627 10:44:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:29.627 10:44:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:29.627 10:44:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.627 10:44:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.627 10:44:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.627 10:44:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:29.627 10:44:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:29.627 10:44:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:29.627 10:44:45 -- common/autotest_common.sh@10 -- # set +x 00:14:31.524 10:44:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:31.524 10:44:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:31.524 10:44:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:31.524 10:44:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:31.524 10:44:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:31.524 10:44:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:31.524 10:44:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:31.524 10:44:47 -- nvmf/common.sh@294 -- # net_devs=() 00:14:31.524 10:44:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:31.524 10:44:47 -- nvmf/common.sh@295 -- # e810=() 00:14:31.524 10:44:47 -- nvmf/common.sh@295 -- # local -ga e810 00:14:31.524 10:44:47 -- nvmf/common.sh@296 -- # x722=() 
00:14:31.524 10:44:47 -- nvmf/common.sh@296 -- # local -ga x722 00:14:31.524 10:44:47 -- nvmf/common.sh@297 -- # mlx=() 00:14:31.524 10:44:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:31.524 10:44:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.524 10:44:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:31.524 10:44:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:31.524 10:44:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:31.524 10:44:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:31.524 10:44:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:31.524 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:31.524 10:44:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:31.524 10:44:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:31.524 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:31.524 10:44:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:31.524 10:44:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:31.524 10:44:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.524 10:44:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:31.524 10:44:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.524 10:44:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:31.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:31.524 10:44:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
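The device discovery above is a sysfs walk in nvmf/common.sh: each supported NIC (here two Intel E810 ports, vendor 0x8086 device 0x159b, bound to the ice driver) is mapped to its kernel net device. A simplified, self-contained sketch of that mapping (the real helper also knows x722 and Mellanox IDs and works from a cached PCI bus scan):

    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue   # E810, as in the trace
        pci_net_devs=( "$pci"/net/* )              # e.g. .../0000:0a:00.0/net/cvl_0_0
        net_devs+=( "${pci_net_devs[@]##*/}" )     # keep only the interface names: cvl_0_0, cvl_0_1
    done
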
00:14:31.524 10:44:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:31.524 10:44:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.524 10:44:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:31.524 10:44:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.524 10:44:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:31.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:31.524 10:44:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.524 10:44:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:31.524 10:44:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:31.524 10:44:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:31.524 10:44:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.524 10:44:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.524 10:44:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.524 10:44:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:31.524 10:44:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.524 10:44:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.524 10:44:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:31.524 10:44:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.524 10:44:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.524 10:44:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:31.524 10:44:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:31.524 10:44:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.524 10:44:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.524 10:44:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.524 10:44:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.524 10:44:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:31.524 10:44:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.524 10:44:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.524 10:44:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.524 10:44:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:31.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:31.524 00:14:31.524 --- 10.0.0.2 ping statistics --- 00:14:31.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.524 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:31.524 10:44:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:14:31.524 00:14:31.524 --- 10.0.0.1 ping statistics --- 00:14:31.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.524 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:31.524 10:44:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.524 10:44:47 -- nvmf/common.sh@410 -- # return 0 00:14:31.524 10:44:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.524 10:44:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.524 10:44:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.524 10:44:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.525 10:44:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.525 10:44:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.525 10:44:47 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:31.525 10:44:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.525 10:44:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:31.525 10:44:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.525 10:44:47 -- nvmf/common.sh@469 -- # nvmfpid=3418771 00:14:31.525 10:44:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:31.525 10:44:47 -- nvmf/common.sh@470 -- # waitforlisten 3418771 00:14:31.525 10:44:47 -- common/autotest_common.sh@819 -- # '[' -z 3418771 ']' 00:14:31.525 10:44:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.525 10:44:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:31.525 10:44:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.525 10:44:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:31.525 10:44:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.525 [2024-07-10 10:44:48.033049] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:31.525 [2024-07-10 10:44:48.033121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.525 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.525 [2024-07-10 10:44:48.101672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.525 [2024-07-10 10:44:48.196270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.525 [2024-07-10 10:44:48.196439] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.525 [2024-07-10 10:44:48.196460] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.525 [2024-07-10 10:44:48.196475] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
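Everything from nvmf_tcp_init through waitforlisten above can be condensed as follows: the target-side port is moved into its own network namespace, addressed, verified with a ping in each direction, and then nvmf_tgt is started inside that namespace. Interface names, IPs and flags are taken from the trace; the socket-polling loop at the end is only an assumption about what waitforlisten does.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # 0xE = cores 1-3, matching the reactor messages
    nvmfpid=$!
    retry=0
    while [[ ! -S /var/tmp/spdk.sock ]] && (( retry++ < 100 )); do sleep 0.5; done   # crude stand-in for waitforlisten
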
00:14:31.525 [2024-07-10 10:44:48.196561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.525 [2024-07-10 10:44:48.200445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.525 [2024-07-10 10:44:48.200451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.457 10:44:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:32.457 10:44:48 -- common/autotest_common.sh@852 -- # return 0 00:14:32.457 10:44:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:32.457 10:44:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:32.457 10:44:48 -- common/autotest_common.sh@10 -- # set +x 00:14:32.457 10:44:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.457 10:44:49 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.457 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.457 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.457 [2024-07-10 10:44:49.020193] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.457 10:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.457 10:44:49 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.457 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.457 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.457 10:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.457 10:44:49 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.457 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.457 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.457 [2024-07-10 10:44:49.050556] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.457 10:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.457 10:44:49 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.457 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.457 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.457 NULL1 00:14:32.457 10:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.457 10:44:49 -- target/connect_stress.sh@21 -- # PERF_PID=3418928 00:14:32.457 10:44:49 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:32.457 10:44:49 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:32.457 10:44:49 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.457 10:44:49 -- target/connect_stress.sh@28 -- # cat 00:14:32.457 10:44:49 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:32.457 10:44:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.457 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.457 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.714 10:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.714 10:44:49 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:32.714 10:44:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.714 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.715 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.972 10:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.972 10:44:49 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:32.972 10:44:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.972 10:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.972 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:14:33.536 10:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.536 10:44:50 -- target/connect_stress.sh@34 -- # 
kill -0 3418928 00:14:33.536 10:44:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.536 10:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.536 10:44:50 -- common/autotest_common.sh@10 -- # set +x 00:14:33.793 10:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.793 10:44:50 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:33.793 10:44:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.793 10:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.793 10:44:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.050 10:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.050 10:44:50 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:34.050 10:44:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.050 10:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.050 10:44:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.307 10:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.307 10:44:51 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:34.307 10:44:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.307 10:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.307 10:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:34.564 10:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.564 10:44:51 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:34.564 10:44:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.564 10:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.564 10:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:35.128 10:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.128 10:44:51 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:35.128 10:44:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.128 10:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.128 10:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:35.385 10:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.385 10:44:51 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:35.385 10:44:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.385 10:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.385 10:44:51 -- common/autotest_common.sh@10 -- # set +x 00:14:35.643 10:44:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.643 10:44:52 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:35.643 10:44:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.643 10:44:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.643 10:44:52 -- common/autotest_common.sh@10 -- # set +x 00:14:35.901 10:44:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.901 10:44:52 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:35.901 10:44:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.901 10:44:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.901 10:44:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.159 10:44:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.159 10:44:52 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:36.159 10:44:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.159 10:44:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.159 10:44:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.724 10:44:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.724 10:44:53 -- target/connect_stress.sh@34 -- # kill -0 
3418928 00:14:36.724 10:44:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.724 10:44:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.724 10:44:53 -- common/autotest_common.sh@10 -- # set +x 00:14:36.981 10:44:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.981 10:44:53 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:36.981 10:44:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.981 10:44:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.981 10:44:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.246 10:44:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.246 10:44:53 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:37.246 10:44:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.246 10:44:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.246 10:44:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.504 10:44:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.504 10:44:54 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:37.504 10:44:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.504 10:44:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.504 10:44:54 -- common/autotest_common.sh@10 -- # set +x 00:14:37.762 10:44:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.762 10:44:54 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:37.762 10:44:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.762 10:44:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.762 10:44:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.328 10:44:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.328 10:44:54 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:38.328 10:44:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.328 10:44:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.328 10:44:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.585 10:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.585 10:44:55 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:38.585 10:44:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.585 10:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.585 10:44:55 -- common/autotest_common.sh@10 -- # set +x 00:14:38.843 10:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.843 10:44:55 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:38.843 10:44:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.843 10:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.843 10:44:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.101 10:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.101 10:44:55 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:39.101 10:44:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.101 10:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.101 10:44:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.359 10:44:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.359 10:44:56 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:39.359 10:44:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.359 10:44:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.359 10:44:56 -- common/autotest_common.sh@10 -- # set +x 00:14:39.924 10:44:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.924 10:44:56 -- target/connect_stress.sh@34 -- # kill -0 3418928 
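The long run of "kill -0 3418928 / rpc_cmd" pairs above is connect_stress.sh keeping the target busy while the stress binary runs: after creating the TCP transport, subsystem cnode1 (capped at 10 namespaces), the 10.0.0.2:4420 listener and a null bdev, the script launches connect_stress for 10 seconds and feeds batches of RPCs until it exits. A sketch reconstructed from the @20-@38 line references; the exact contents of rpc.txt are not visible in the log.

    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do    # the "No such process" message later in the log is this check failing once the tool exits
        ./scripts/rpc.py < rpc.txt               # assumption: rpc.txt holds the ~20 queued RPC calls built at lines 27-28
    done
    wait "$PERF_PID"
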
00:14:39.924 10:44:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.924 10:44:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.924 10:44:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.182 10:44:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.182 10:44:56 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:40.182 10:44:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.182 10:44:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.182 10:44:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.440 10:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.440 10:44:57 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:40.440 10:44:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.440 10:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.440 10:44:57 -- common/autotest_common.sh@10 -- # set +x 00:14:40.698 10:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.698 10:44:57 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:40.698 10:44:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.698 10:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.698 10:44:57 -- common/autotest_common.sh@10 -- # set +x 00:14:40.956 10:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.956 10:44:57 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:40.956 10:44:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.956 10:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.956 10:44:57 -- common/autotest_common.sh@10 -- # set +x 00:14:41.521 10:44:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.521 10:44:58 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:41.521 10:44:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.521 10:44:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.521 10:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:41.778 10:44:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.778 10:44:58 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:41.778 10:44:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.778 10:44:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.778 10:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:42.036 10:44:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.036 10:44:58 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:42.036 10:44:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.036 10:44:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.036 10:44:58 -- common/autotest_common.sh@10 -- # set +x 00:14:42.294 10:44:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.294 10:44:59 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:42.294 10:44:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.294 10:44:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.294 10:44:59 -- common/autotest_common.sh@10 -- # set +x 00:14:42.552 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.810 10:44:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.810 10:44:59 -- target/connect_stress.sh@34 -- # kill -0 3418928 00:14:42.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3418928) - No such process 00:14:42.810 10:44:59 -- target/connect_stress.sh@38 -- # wait 3418928 00:14:42.810 10:44:59 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.810 10:44:59 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:42.810 10:44:59 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:42.810 10:44:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:42.810 10:44:59 -- nvmf/common.sh@116 -- # sync 00:14:42.810 10:44:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:42.810 10:44:59 -- nvmf/common.sh@119 -- # set +e 00:14:42.810 10:44:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:42.810 10:44:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:42.810 rmmod nvme_tcp 00:14:42.810 rmmod nvme_fabrics 00:14:42.810 rmmod nvme_keyring 00:14:42.810 10:44:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:42.810 10:44:59 -- nvmf/common.sh@123 -- # set -e 00:14:42.810 10:44:59 -- nvmf/common.sh@124 -- # return 0 00:14:42.810 10:44:59 -- nvmf/common.sh@477 -- # '[' -n 3418771 ']' 00:14:42.810 10:44:59 -- nvmf/common.sh@478 -- # killprocess 3418771 00:14:42.810 10:44:59 -- common/autotest_common.sh@926 -- # '[' -z 3418771 ']' 00:14:42.810 10:44:59 -- common/autotest_common.sh@930 -- # kill -0 3418771 00:14:42.810 10:44:59 -- common/autotest_common.sh@931 -- # uname 00:14:42.810 10:44:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:42.810 10:44:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3418771 00:14:42.810 10:44:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:42.810 10:44:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:42.810 10:44:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3418771' 00:14:42.810 killing process with pid 3418771 00:14:42.810 10:44:59 -- common/autotest_common.sh@945 -- # kill 3418771 00:14:42.810 10:44:59 -- common/autotest_common.sh@950 -- # wait 3418771 00:14:43.068 10:44:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:43.068 10:44:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:43.068 10:44:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:43.069 10:44:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.069 10:44:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:43.069 10:44:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.069 10:44:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.069 10:44:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.975 10:45:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:44.975 00:14:44.975 real 0m15.892s 00:14:44.975 user 0m40.453s 00:14:44.975 sys 0m5.832s 00:14:44.975 10:45:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.975 10:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:44.975 ************************************ 00:14:44.975 END TEST nvmf_connect_stress 00:14:44.975 ************************************ 00:14:44.975 10:45:01 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.975 10:45:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.975 10:45:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.975 10:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:44.975 ************************************ 00:14:44.975 START TEST nvmf_fused_ordering 00:14:44.975 ************************************ 00:14:44.975 10:45:01 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:45.234 * Looking for test storage... 00:14:45.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.234 10:45:01 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.234 10:45:01 -- nvmf/common.sh@7 -- # uname -s 00:14:45.234 10:45:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.234 10:45:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.234 10:45:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.234 10:45:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.234 10:45:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.234 10:45:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.234 10:45:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.234 10:45:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.234 10:45:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.234 10:45:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.234 10:45:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.234 10:45:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.234 10:45:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.234 10:45:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.234 10:45:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.234 10:45:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.234 10:45:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.234 10:45:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.234 10:45:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.234 10:45:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.234 10:45:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.234 10:45:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.234 10:45:01 -- paths/export.sh@5 -- # export PATH 00:14:45.234 10:45:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.234 10:45:01 -- nvmf/common.sh@46 -- # : 0 00:14:45.234 10:45:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:45.234 10:45:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:45.234 10:45:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:45.234 10:45:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.234 10:45:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.234 10:45:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:45.234 10:45:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:45.234 10:45:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:45.234 10:45:01 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:45.234 10:45:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:45.234 10:45:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.234 10:45:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:45.234 10:45:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:45.234 10:45:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:45.235 10:45:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.235 10:45:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.235 10:45:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.235 10:45:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:45.235 10:45:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:45.235 10:45:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:45.235 10:45:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.134 10:45:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:47.134 10:45:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:47.134 10:45:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:47.134 10:45:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:47.134 10:45:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:47.134 10:45:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:47.134 10:45:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:47.134 10:45:03 -- nvmf/common.sh@294 -- # net_devs=() 00:14:47.134 10:45:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:47.134 10:45:03 -- nvmf/common.sh@295 -- # e810=() 00:14:47.134 10:45:03 -- nvmf/common.sh@295 -- # local -ga e810 00:14:47.134 10:45:03 -- nvmf/common.sh@296 -- # x722=() 
00:14:47.134 10:45:03 -- nvmf/common.sh@296 -- # local -ga x722 00:14:47.134 10:45:03 -- nvmf/common.sh@297 -- # mlx=() 00:14:47.134 10:45:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:47.134 10:45:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.134 10:45:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:47.134 10:45:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:47.134 10:45:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:47.134 10:45:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:47.134 10:45:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:47.134 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:47.134 10:45:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:47.134 10:45:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:47.134 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:47.134 10:45:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:47.134 10:45:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:47.134 10:45:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.134 10:45:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:47.134 10:45:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.134 10:45:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:47.134 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:47.134 10:45:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
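The "Found net devices under 0000:0a:00.0: cvl_0_0" line above comes from a sysfs walk: for each supported PCI function, gather_supported_nvmf_pci_devs globs /sys/bus/pci/devices/<addr>/net/ and keeps whatever interface names it finds there; the same walk repeats for the second port just below. A minimal standalone sketch of that lookup, using the first E810 port reported in this run (any PCI address with a bound network driver works):

  #!/usr/bin/env bash
  # Resolve the kernel net device(s) behind one PCI network function via sysfs.
  shopt -s nullglob                                 # unmatched glob -> empty array
  pci=0000:0a:00.0                                  # first port from this run
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
  if (( ${#pci_net_devs[@]} == 0 )); then
      echo "no net device bound to $pci" >&2
      exit 1
  fi
  pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the device names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"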
00:14:47.134 10:45:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:47.134 10:45:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.134 10:45:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:47.134 10:45:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.134 10:45:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:47.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:47.134 10:45:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.134 10:45:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:47.134 10:45:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:47.134 10:45:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:47.134 10:45:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.134 10:45:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.134 10:45:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.134 10:45:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:47.134 10:45:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.134 10:45:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.134 10:45:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:47.134 10:45:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.134 10:45:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.134 10:45:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:47.134 10:45:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:47.134 10:45:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.134 10:45:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.134 10:45:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.134 10:45:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.134 10:45:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:47.134 10:45:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.134 10:45:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.134 10:45:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.134 10:45:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:47.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:14:47.134 00:14:47.134 --- 10.0.0.2 ping statistics --- 00:14:47.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.134 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:47.134 10:45:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:14:47.134 00:14:47.134 --- 10.0.0.1 ping statistics --- 00:14:47.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.134 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:47.134 10:45:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.134 10:45:03 -- nvmf/common.sh@410 -- # return 0 00:14:47.134 10:45:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:47.134 10:45:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.134 10:45:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:47.134 10:45:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.134 10:45:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:47.134 10:45:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:47.392 10:45:03 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:47.392 10:45:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:47.392 10:45:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:47.392 10:45:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.392 10:45:03 -- nvmf/common.sh@469 -- # nvmfpid=3422232 00:14:47.393 10:45:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:47.393 10:45:03 -- nvmf/common.sh@470 -- # waitforlisten 3422232 00:14:47.393 10:45:03 -- common/autotest_common.sh@819 -- # '[' -z 3422232 ']' 00:14:47.393 10:45:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.393 10:45:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:47.393 10:45:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.393 10:45:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:47.393 10:45:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.393 [2024-07-10 10:45:04.015214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:47.393 [2024-07-10 10:45:04.015304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.393 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.393 [2024-07-10 10:45:04.079201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.393 [2024-07-10 10:45:04.161548] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.393 [2024-07-10 10:45:04.161719] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.393 [2024-07-10 10:45:04.161737] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.393 [2024-07-10 10:45:04.161748] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
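Before the target can listen, nvmf_tcp_init above split the two ports across a network namespace: cvl_0_0 was moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stayed in the host as 10.0.0.1, an iptables rule opened TCP/4420, and one ping in each direction proved reachability; nvmf_tgt was then launched inside that namespace (its startup notices continue below). A condensed sketch of the wiring, with the interface names and addresses taken from this run (root required; they differ per machine):

  #!/usr/bin/env bash
  # Target port goes into a namespace as 10.0.0.2, initiator port stays in the
  # host as 10.0.0.1; this mirrors the nvmf_tcp_init steps logged above.
  set -e
  target_if=cvl_0_0
  initiator_if=cvl_0_1
  ns=cvl_0_0_ns_spdk

  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"
  ip addr add 10.0.0.1/24 dev "$initiator_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # host side reaches the target port
  ip netns exec "$ns" ping -c 1 10.0.0.1  # target side reaches the host port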
00:14:47.393 [2024-07-10 10:45:04.161789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.325 10:45:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:48.325 10:45:04 -- common/autotest_common.sh@852 -- # return 0 00:14:48.325 10:45:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.325 10:45:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:48.325 10:45:04 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 10:45:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.325 10:45:05 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.325 10:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.325 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 [2024-07-10 10:45:05.021775] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.325 10:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.325 10:45:05 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:48.325 10:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.325 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 10:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.325 10:45:05 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.325 10:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.325 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 [2024-07-10 10:45:05.037963] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.325 10:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.325 10:45:05 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:48.325 10:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.325 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 NULL1 00:14:48.325 10:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.325 10:45:05 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:48.325 10:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.325 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 10:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.325 10:45:05 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:48.325 10:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.325 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:14:48.325 10:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.325 10:45:05 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:48.325 [2024-07-10 10:45:05.081259] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
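Once the reactor is up, the rpc_cmd calls above configure the whole target over /var/tmp/spdk.sock before the fused_ordering example (whose EAL startup continues below) connects: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with any-host access, a serial number and room for up to 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1. rpc_cmd is a thin wrapper around the SPDK RPC client, so the same provisioning can be replayed directly with scripts/rpc.py (path relative to an SPDK checkout; arguments copied from the run above):

  # Provision the target the way the rpc_cmd sequence above does.
  rpc=./scripts/rpc.py   # adjust to your SPDK checkout
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # 1000 MiB backing device, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The initiator side then only needs the transport ID string shown above (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) to attach and run its fused-command ordering checks, which produce the fused_ordering(N) counters that follow.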
00:14:48.325 [2024-07-10 10:45:05.081294] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422384 ] 00:14:48.325 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.890 Attached to nqn.2016-06.io.spdk:cnode1 00:14:48.890 Namespace ID: 1 size: 1GB 00:14:48.890 fused_ordering(0) 00:14:48.890 fused_ordering(1) 00:14:48.890 fused_ordering(2) 00:14:48.890 fused_ordering(3) 00:14:48.890 fused_ordering(4) 00:14:48.890 fused_ordering(5) 00:14:48.890 fused_ordering(6) 00:14:48.890 fused_ordering(7) 00:14:48.890 fused_ordering(8) 00:14:48.890 fused_ordering(9) 00:14:48.890 fused_ordering(10) 00:14:48.890 fused_ordering(11) 00:14:48.890 fused_ordering(12) 00:14:48.890 fused_ordering(13) 00:14:48.890 fused_ordering(14) 00:14:48.890 fused_ordering(15) 00:14:48.890 fused_ordering(16) 00:14:48.890 fused_ordering(17) 00:14:48.890 fused_ordering(18) 00:14:48.890 fused_ordering(19) 00:14:48.890 fused_ordering(20) 00:14:48.890 fused_ordering(21) 00:14:48.890 fused_ordering(22) 00:14:48.890 fused_ordering(23) 00:14:48.890 fused_ordering(24) 00:14:48.890 fused_ordering(25) 00:14:48.890 fused_ordering(26) 00:14:48.890 fused_ordering(27) 00:14:48.890 fused_ordering(28) 00:14:48.890 fused_ordering(29) 00:14:48.890 fused_ordering(30) 00:14:48.890 fused_ordering(31) 00:14:48.890 fused_ordering(32) 00:14:48.890 fused_ordering(33) 00:14:48.890 fused_ordering(34) 00:14:48.890 fused_ordering(35) 00:14:48.890 fused_ordering(36) 00:14:48.890 fused_ordering(37) 00:14:48.890 fused_ordering(38) 00:14:48.890 fused_ordering(39) 00:14:48.890 fused_ordering(40) 00:14:48.890 fused_ordering(41) 00:14:48.890 fused_ordering(42) 00:14:48.890 fused_ordering(43) 00:14:48.890 fused_ordering(44) 00:14:48.890 fused_ordering(45) 00:14:48.890 fused_ordering(46) 00:14:48.890 fused_ordering(47) 00:14:48.890 fused_ordering(48) 00:14:48.890 fused_ordering(49) 00:14:48.890 fused_ordering(50) 00:14:48.890 fused_ordering(51) 00:14:48.890 fused_ordering(52) 00:14:48.890 fused_ordering(53) 00:14:48.890 fused_ordering(54) 00:14:48.890 fused_ordering(55) 00:14:48.890 fused_ordering(56) 00:14:48.890 fused_ordering(57) 00:14:48.890 fused_ordering(58) 00:14:48.890 fused_ordering(59) 00:14:48.890 fused_ordering(60) 00:14:48.890 fused_ordering(61) 00:14:48.890 fused_ordering(62) 00:14:48.890 fused_ordering(63) 00:14:48.890 fused_ordering(64) 00:14:48.890 fused_ordering(65) 00:14:48.890 fused_ordering(66) 00:14:48.890 fused_ordering(67) 00:14:48.890 fused_ordering(68) 00:14:48.890 fused_ordering(69) 00:14:48.890 fused_ordering(70) 00:14:48.890 fused_ordering(71) 00:14:48.890 fused_ordering(72) 00:14:48.890 fused_ordering(73) 00:14:48.890 fused_ordering(74) 00:14:48.890 fused_ordering(75) 00:14:48.890 fused_ordering(76) 00:14:48.890 fused_ordering(77) 00:14:48.890 fused_ordering(78) 00:14:48.890 fused_ordering(79) 00:14:48.890 fused_ordering(80) 00:14:48.890 fused_ordering(81) 00:14:48.890 fused_ordering(82) 00:14:48.890 fused_ordering(83) 00:14:48.890 fused_ordering(84) 00:14:48.890 fused_ordering(85) 00:14:48.890 fused_ordering(86) 00:14:48.890 fused_ordering(87) 00:14:48.890 fused_ordering(88) 00:14:48.890 fused_ordering(89) 00:14:48.890 fused_ordering(90) 00:14:48.890 fused_ordering(91) 00:14:48.890 fused_ordering(92) 00:14:48.890 fused_ordering(93) 00:14:48.890 fused_ordering(94) 00:14:48.890 fused_ordering(95) 00:14:48.890 fused_ordering(96) 00:14:48.890 
fused_ordering(97) 00:14:48.890 fused_ordering(98) 00:14:48.890 fused_ordering(99) 00:14:48.890 fused_ordering(100) 00:14:48.890 fused_ordering(101) 00:14:48.890 fused_ordering(102) 00:14:48.890 fused_ordering(103) 00:14:48.890 fused_ordering(104) 00:14:48.890 fused_ordering(105) 00:14:48.890 fused_ordering(106) 00:14:48.890 fused_ordering(107) 00:14:48.890 fused_ordering(108) 00:14:48.890 fused_ordering(109) 00:14:48.890 fused_ordering(110) 00:14:48.890 fused_ordering(111) 00:14:48.890 fused_ordering(112) 00:14:48.890 fused_ordering(113) 00:14:48.890 fused_ordering(114) 00:14:48.890 fused_ordering(115) 00:14:48.890 fused_ordering(116) 00:14:48.890 fused_ordering(117) 00:14:48.890 fused_ordering(118) 00:14:48.890 fused_ordering(119) 00:14:48.890 fused_ordering(120) 00:14:48.890 fused_ordering(121) 00:14:48.890 fused_ordering(122) 00:14:48.890 fused_ordering(123) 00:14:48.890 fused_ordering(124) 00:14:48.890 fused_ordering(125) 00:14:48.890 fused_ordering(126) 00:14:48.890 fused_ordering(127) 00:14:48.890 fused_ordering(128) 00:14:48.890 fused_ordering(129) 00:14:48.890 fused_ordering(130) 00:14:48.890 fused_ordering(131) 00:14:48.890 fused_ordering(132) 00:14:48.890 fused_ordering(133) 00:14:48.890 fused_ordering(134) 00:14:48.890 fused_ordering(135) 00:14:48.890 fused_ordering(136) 00:14:48.890 fused_ordering(137) 00:14:48.890 fused_ordering(138) 00:14:48.890 fused_ordering(139) 00:14:48.890 fused_ordering(140) 00:14:48.890 fused_ordering(141) 00:14:48.890 fused_ordering(142) 00:14:48.890 fused_ordering(143) 00:14:48.890 fused_ordering(144) 00:14:48.890 fused_ordering(145) 00:14:48.890 fused_ordering(146) 00:14:48.890 fused_ordering(147) 00:14:48.890 fused_ordering(148) 00:14:48.890 fused_ordering(149) 00:14:48.890 fused_ordering(150) 00:14:48.890 fused_ordering(151) 00:14:48.890 fused_ordering(152) 00:14:48.890 fused_ordering(153) 00:14:48.890 fused_ordering(154) 00:14:48.890 fused_ordering(155) 00:14:48.890 fused_ordering(156) 00:14:48.890 fused_ordering(157) 00:14:48.890 fused_ordering(158) 00:14:48.890 fused_ordering(159) 00:14:48.890 fused_ordering(160) 00:14:48.890 fused_ordering(161) 00:14:48.890 fused_ordering(162) 00:14:48.890 fused_ordering(163) 00:14:48.890 fused_ordering(164) 00:14:48.890 fused_ordering(165) 00:14:48.890 fused_ordering(166) 00:14:48.890 fused_ordering(167) 00:14:48.890 fused_ordering(168) 00:14:48.890 fused_ordering(169) 00:14:48.890 fused_ordering(170) 00:14:48.890 fused_ordering(171) 00:14:48.890 fused_ordering(172) 00:14:48.890 fused_ordering(173) 00:14:48.890 fused_ordering(174) 00:14:48.890 fused_ordering(175) 00:14:48.890 fused_ordering(176) 00:14:48.890 fused_ordering(177) 00:14:48.890 fused_ordering(178) 00:14:48.890 fused_ordering(179) 00:14:48.890 fused_ordering(180) 00:14:48.890 fused_ordering(181) 00:14:48.890 fused_ordering(182) 00:14:48.890 fused_ordering(183) 00:14:48.890 fused_ordering(184) 00:14:48.891 fused_ordering(185) 00:14:48.891 fused_ordering(186) 00:14:48.891 fused_ordering(187) 00:14:48.891 fused_ordering(188) 00:14:48.891 fused_ordering(189) 00:14:48.891 fused_ordering(190) 00:14:48.891 fused_ordering(191) 00:14:48.891 fused_ordering(192) 00:14:48.891 fused_ordering(193) 00:14:48.891 fused_ordering(194) 00:14:48.891 fused_ordering(195) 00:14:48.891 fused_ordering(196) 00:14:48.891 fused_ordering(197) 00:14:48.891 fused_ordering(198) 00:14:48.891 fused_ordering(199) 00:14:48.891 fused_ordering(200) 00:14:48.891 fused_ordering(201) 00:14:48.891 fused_ordering(202) 00:14:48.891 fused_ordering(203) 00:14:48.891 fused_ordering(204) 
00:14:48.891 fused_ordering(205) 00:14:49.456 fused_ordering(206) 00:14:49.456 fused_ordering(207) 00:14:49.456 fused_ordering(208) 00:14:49.456 fused_ordering(209) 00:14:49.456 fused_ordering(210) 00:14:49.456 fused_ordering(211) 00:14:49.456 fused_ordering(212) 00:14:49.456 fused_ordering(213) 00:14:49.456 fused_ordering(214) 00:14:49.456 fused_ordering(215) 00:14:49.456 fused_ordering(216) 00:14:49.456 fused_ordering(217) 00:14:49.456 fused_ordering(218) 00:14:49.456 fused_ordering(219) 00:14:49.456 fused_ordering(220) 00:14:49.456 fused_ordering(221) 00:14:49.456 fused_ordering(222) 00:14:49.456 fused_ordering(223) 00:14:49.456 fused_ordering(224) 00:14:49.456 fused_ordering(225) 00:14:49.456 fused_ordering(226) 00:14:49.456 fused_ordering(227) 00:14:49.456 fused_ordering(228) 00:14:49.456 fused_ordering(229) 00:14:49.456 fused_ordering(230) 00:14:49.456 fused_ordering(231) 00:14:49.456 fused_ordering(232) 00:14:49.456 fused_ordering(233) 00:14:49.456 fused_ordering(234) 00:14:49.456 fused_ordering(235) 00:14:49.456 fused_ordering(236) 00:14:49.456 fused_ordering(237) 00:14:49.456 fused_ordering(238) 00:14:49.456 fused_ordering(239) 00:14:49.456 fused_ordering(240) 00:14:49.456 fused_ordering(241) 00:14:49.456 fused_ordering(242) 00:14:49.456 fused_ordering(243) 00:14:49.456 fused_ordering(244) 00:14:49.456 fused_ordering(245) 00:14:49.456 fused_ordering(246) 00:14:49.456 fused_ordering(247) 00:14:49.456 fused_ordering(248) 00:14:49.456 fused_ordering(249) 00:14:49.456 fused_ordering(250) 00:14:49.456 fused_ordering(251) 00:14:49.456 fused_ordering(252) 00:14:49.456 fused_ordering(253) 00:14:49.456 fused_ordering(254) 00:14:49.456 fused_ordering(255) 00:14:49.456 fused_ordering(256) 00:14:49.456 fused_ordering(257) 00:14:49.456 fused_ordering(258) 00:14:49.456 fused_ordering(259) 00:14:49.456 fused_ordering(260) 00:14:49.456 fused_ordering(261) 00:14:49.456 fused_ordering(262) 00:14:49.456 fused_ordering(263) 00:14:49.456 fused_ordering(264) 00:14:49.456 fused_ordering(265) 00:14:49.456 fused_ordering(266) 00:14:49.456 fused_ordering(267) 00:14:49.456 fused_ordering(268) 00:14:49.456 fused_ordering(269) 00:14:49.456 fused_ordering(270) 00:14:49.456 fused_ordering(271) 00:14:49.456 fused_ordering(272) 00:14:49.456 fused_ordering(273) 00:14:49.456 fused_ordering(274) 00:14:49.456 fused_ordering(275) 00:14:49.456 fused_ordering(276) 00:14:49.456 fused_ordering(277) 00:14:49.456 fused_ordering(278) 00:14:49.456 fused_ordering(279) 00:14:49.456 fused_ordering(280) 00:14:49.456 fused_ordering(281) 00:14:49.456 fused_ordering(282) 00:14:49.456 fused_ordering(283) 00:14:49.456 fused_ordering(284) 00:14:49.456 fused_ordering(285) 00:14:49.456 fused_ordering(286) 00:14:49.456 fused_ordering(287) 00:14:49.456 fused_ordering(288) 00:14:49.456 fused_ordering(289) 00:14:49.456 fused_ordering(290) 00:14:49.456 fused_ordering(291) 00:14:49.456 fused_ordering(292) 00:14:49.456 fused_ordering(293) 00:14:49.456 fused_ordering(294) 00:14:49.456 fused_ordering(295) 00:14:49.456 fused_ordering(296) 00:14:49.456 fused_ordering(297) 00:14:49.456 fused_ordering(298) 00:14:49.456 fused_ordering(299) 00:14:49.456 fused_ordering(300) 00:14:49.456 fused_ordering(301) 00:14:49.456 fused_ordering(302) 00:14:49.456 fused_ordering(303) 00:14:49.456 fused_ordering(304) 00:14:49.456 fused_ordering(305) 00:14:49.456 fused_ordering(306) 00:14:49.456 fused_ordering(307) 00:14:49.456 fused_ordering(308) 00:14:49.456 fused_ordering(309) 00:14:49.456 fused_ordering(310) 00:14:49.456 fused_ordering(311) 00:14:49.456 
fused_ordering(312) 00:14:49.456 fused_ordering(313) 00:14:49.456 fused_ordering(314) 00:14:49.456 fused_ordering(315) 00:14:49.456 fused_ordering(316) 00:14:49.456 fused_ordering(317) 00:14:49.456 fused_ordering(318) 00:14:49.456 fused_ordering(319) 00:14:49.456 fused_ordering(320) 00:14:49.456 fused_ordering(321) 00:14:49.456 fused_ordering(322) 00:14:49.456 fused_ordering(323) 00:14:49.456 fused_ordering(324) 00:14:49.456 fused_ordering(325) 00:14:49.456 fused_ordering(326) 00:14:49.456 fused_ordering(327) 00:14:49.456 fused_ordering(328) 00:14:49.456 fused_ordering(329) 00:14:49.456 fused_ordering(330) 00:14:49.456 fused_ordering(331) 00:14:49.456 fused_ordering(332) 00:14:49.456 fused_ordering(333) 00:14:49.456 fused_ordering(334) 00:14:49.456 fused_ordering(335) 00:14:49.456 fused_ordering(336) 00:14:49.456 fused_ordering(337) 00:14:49.456 fused_ordering(338) 00:14:49.456 fused_ordering(339) 00:14:49.456 fused_ordering(340) 00:14:49.456 fused_ordering(341) 00:14:49.456 fused_ordering(342) 00:14:49.456 fused_ordering(343) 00:14:49.456 fused_ordering(344) 00:14:49.456 fused_ordering(345) 00:14:49.456 fused_ordering(346) 00:14:49.456 fused_ordering(347) 00:14:49.456 fused_ordering(348) 00:14:49.456 fused_ordering(349) 00:14:49.456 fused_ordering(350) 00:14:49.456 fused_ordering(351) 00:14:49.456 fused_ordering(352) 00:14:49.456 fused_ordering(353) 00:14:49.456 fused_ordering(354) 00:14:49.456 fused_ordering(355) 00:14:49.456 fused_ordering(356) 00:14:49.456 fused_ordering(357) 00:14:49.456 fused_ordering(358) 00:14:49.456 fused_ordering(359) 00:14:49.456 fused_ordering(360) 00:14:49.456 fused_ordering(361) 00:14:49.456 fused_ordering(362) 00:14:49.456 fused_ordering(363) 00:14:49.456 fused_ordering(364) 00:14:49.456 fused_ordering(365) 00:14:49.456 fused_ordering(366) 00:14:49.456 fused_ordering(367) 00:14:49.456 fused_ordering(368) 00:14:49.456 fused_ordering(369) 00:14:49.456 fused_ordering(370) 00:14:49.456 fused_ordering(371) 00:14:49.456 fused_ordering(372) 00:14:49.456 fused_ordering(373) 00:14:49.456 fused_ordering(374) 00:14:49.456 fused_ordering(375) 00:14:49.456 fused_ordering(376) 00:14:49.456 fused_ordering(377) 00:14:49.456 fused_ordering(378) 00:14:49.456 fused_ordering(379) 00:14:49.456 fused_ordering(380) 00:14:49.456 fused_ordering(381) 00:14:49.456 fused_ordering(382) 00:14:49.456 fused_ordering(383) 00:14:49.456 fused_ordering(384) 00:14:49.456 fused_ordering(385) 00:14:49.456 fused_ordering(386) 00:14:49.456 fused_ordering(387) 00:14:49.456 fused_ordering(388) 00:14:49.456 fused_ordering(389) 00:14:49.456 fused_ordering(390) 00:14:49.456 fused_ordering(391) 00:14:49.456 fused_ordering(392) 00:14:49.456 fused_ordering(393) 00:14:49.456 fused_ordering(394) 00:14:49.456 fused_ordering(395) 00:14:49.456 fused_ordering(396) 00:14:49.456 fused_ordering(397) 00:14:49.456 fused_ordering(398) 00:14:49.456 fused_ordering(399) 00:14:49.456 fused_ordering(400) 00:14:49.456 fused_ordering(401) 00:14:49.456 fused_ordering(402) 00:14:49.456 fused_ordering(403) 00:14:49.456 fused_ordering(404) 00:14:49.456 fused_ordering(405) 00:14:49.456 fused_ordering(406) 00:14:49.456 fused_ordering(407) 00:14:49.456 fused_ordering(408) 00:14:49.456 fused_ordering(409) 00:14:49.456 fused_ordering(410) 00:14:50.021 fused_ordering(411) 00:14:50.021 fused_ordering(412) 00:14:50.021 fused_ordering(413) 00:14:50.021 fused_ordering(414) 00:14:50.021 fused_ordering(415) 00:14:50.021 fused_ordering(416) 00:14:50.021 fused_ordering(417) 00:14:50.021 fused_ordering(418) 00:14:50.021 fused_ordering(419) 
00:14:50.021 fused_ordering(420) 00:14:50.021 fused_ordering(421) 00:14:50.021 fused_ordering(422) 00:14:50.021 fused_ordering(423) 00:14:50.021 fused_ordering(424) 00:14:50.021 fused_ordering(425) 00:14:50.021 fused_ordering(426) 00:14:50.021 fused_ordering(427) 00:14:50.021 fused_ordering(428) 00:14:50.021 fused_ordering(429) 00:14:50.021 fused_ordering(430) 00:14:50.021 fused_ordering(431) 00:14:50.021 fused_ordering(432) 00:14:50.021 fused_ordering(433) 00:14:50.021 fused_ordering(434) 00:14:50.021 fused_ordering(435) 00:14:50.021 fused_ordering(436) 00:14:50.021 fused_ordering(437) 00:14:50.021 fused_ordering(438) 00:14:50.021 fused_ordering(439) 00:14:50.021 fused_ordering(440) 00:14:50.021 fused_ordering(441) 00:14:50.021 fused_ordering(442) 00:14:50.021 fused_ordering(443) 00:14:50.021 fused_ordering(444) 00:14:50.021 fused_ordering(445) 00:14:50.021 fused_ordering(446) 00:14:50.021 fused_ordering(447) 00:14:50.021 fused_ordering(448) 00:14:50.021 fused_ordering(449) 00:14:50.021 fused_ordering(450) 00:14:50.021 fused_ordering(451) 00:14:50.021 fused_ordering(452) 00:14:50.021 fused_ordering(453) 00:14:50.021 fused_ordering(454) 00:14:50.021 fused_ordering(455) 00:14:50.021 fused_ordering(456) 00:14:50.021 fused_ordering(457) 00:14:50.021 fused_ordering(458) 00:14:50.021 fused_ordering(459) 00:14:50.021 fused_ordering(460) 00:14:50.021 fused_ordering(461) 00:14:50.021 fused_ordering(462) 00:14:50.021 fused_ordering(463) 00:14:50.021 fused_ordering(464) 00:14:50.021 fused_ordering(465) 00:14:50.021 fused_ordering(466) 00:14:50.021 fused_ordering(467) 00:14:50.021 fused_ordering(468) 00:14:50.021 fused_ordering(469) 00:14:50.021 fused_ordering(470) 00:14:50.021 fused_ordering(471) 00:14:50.021 fused_ordering(472) 00:14:50.021 fused_ordering(473) 00:14:50.021 fused_ordering(474) 00:14:50.021 fused_ordering(475) 00:14:50.021 fused_ordering(476) 00:14:50.021 fused_ordering(477) 00:14:50.021 fused_ordering(478) 00:14:50.021 fused_ordering(479) 00:14:50.021 fused_ordering(480) 00:14:50.021 fused_ordering(481) 00:14:50.021 fused_ordering(482) 00:14:50.021 fused_ordering(483) 00:14:50.021 fused_ordering(484) 00:14:50.021 fused_ordering(485) 00:14:50.021 fused_ordering(486) 00:14:50.021 fused_ordering(487) 00:14:50.021 fused_ordering(488) 00:14:50.021 fused_ordering(489) 00:14:50.021 fused_ordering(490) 00:14:50.021 fused_ordering(491) 00:14:50.021 fused_ordering(492) 00:14:50.021 fused_ordering(493) 00:14:50.021 fused_ordering(494) 00:14:50.021 fused_ordering(495) 00:14:50.021 fused_ordering(496) 00:14:50.021 fused_ordering(497) 00:14:50.021 fused_ordering(498) 00:14:50.021 fused_ordering(499) 00:14:50.021 fused_ordering(500) 00:14:50.021 fused_ordering(501) 00:14:50.021 fused_ordering(502) 00:14:50.021 fused_ordering(503) 00:14:50.021 fused_ordering(504) 00:14:50.021 fused_ordering(505) 00:14:50.021 fused_ordering(506) 00:14:50.021 fused_ordering(507) 00:14:50.021 fused_ordering(508) 00:14:50.021 fused_ordering(509) 00:14:50.021 fused_ordering(510) 00:14:50.021 fused_ordering(511) 00:14:50.021 fused_ordering(512) 00:14:50.021 fused_ordering(513) 00:14:50.021 fused_ordering(514) 00:14:50.021 fused_ordering(515) 00:14:50.021 fused_ordering(516) 00:14:50.021 fused_ordering(517) 00:14:50.021 fused_ordering(518) 00:14:50.021 fused_ordering(519) 00:14:50.021 fused_ordering(520) 00:14:50.021 fused_ordering(521) 00:14:50.021 fused_ordering(522) 00:14:50.021 fused_ordering(523) 00:14:50.021 fused_ordering(524) 00:14:50.021 fused_ordering(525) 00:14:50.021 fused_ordering(526) 00:14:50.021 
fused_ordering(527) 00:14:50.021 fused_ordering(528) 00:14:50.021 fused_ordering(529) 00:14:50.021 fused_ordering(530) 00:14:50.021 fused_ordering(531) 00:14:50.021 fused_ordering(532) 00:14:50.021 fused_ordering(533) 00:14:50.021 fused_ordering(534) 00:14:50.021 fused_ordering(535) 00:14:50.021 fused_ordering(536) 00:14:50.021 fused_ordering(537) 00:14:50.021 fused_ordering(538) 00:14:50.021 fused_ordering(539) 00:14:50.021 fused_ordering(540) 00:14:50.021 fused_ordering(541) 00:14:50.021 fused_ordering(542) 00:14:50.021 fused_ordering(543) 00:14:50.021 fused_ordering(544) 00:14:50.021 fused_ordering(545) 00:14:50.021 fused_ordering(546) 00:14:50.021 fused_ordering(547) 00:14:50.021 fused_ordering(548) 00:14:50.021 fused_ordering(549) 00:14:50.021 fused_ordering(550) 00:14:50.021 fused_ordering(551) 00:14:50.021 fused_ordering(552) 00:14:50.021 fused_ordering(553) 00:14:50.021 fused_ordering(554) 00:14:50.021 fused_ordering(555) 00:14:50.021 fused_ordering(556) 00:14:50.021 fused_ordering(557) 00:14:50.021 fused_ordering(558) 00:14:50.021 fused_ordering(559) 00:14:50.021 fused_ordering(560) 00:14:50.021 fused_ordering(561) 00:14:50.021 fused_ordering(562) 00:14:50.021 fused_ordering(563) 00:14:50.021 fused_ordering(564) 00:14:50.021 fused_ordering(565) 00:14:50.021 fused_ordering(566) 00:14:50.021 fused_ordering(567) 00:14:50.021 fused_ordering(568) 00:14:50.022 fused_ordering(569) 00:14:50.022 fused_ordering(570) 00:14:50.022 fused_ordering(571) 00:14:50.022 fused_ordering(572) 00:14:50.022 fused_ordering(573) 00:14:50.022 fused_ordering(574) 00:14:50.022 fused_ordering(575) 00:14:50.022 fused_ordering(576) 00:14:50.022 fused_ordering(577) 00:14:50.022 fused_ordering(578) 00:14:50.022 fused_ordering(579) 00:14:50.022 fused_ordering(580) 00:14:50.022 fused_ordering(581) 00:14:50.022 fused_ordering(582) 00:14:50.022 fused_ordering(583) 00:14:50.022 fused_ordering(584) 00:14:50.022 fused_ordering(585) 00:14:50.022 fused_ordering(586) 00:14:50.022 fused_ordering(587) 00:14:50.022 fused_ordering(588) 00:14:50.022 fused_ordering(589) 00:14:50.022 fused_ordering(590) 00:14:50.022 fused_ordering(591) 00:14:50.022 fused_ordering(592) 00:14:50.022 fused_ordering(593) 00:14:50.022 fused_ordering(594) 00:14:50.022 fused_ordering(595) 00:14:50.022 fused_ordering(596) 00:14:50.022 fused_ordering(597) 00:14:50.022 fused_ordering(598) 00:14:50.022 fused_ordering(599) 00:14:50.022 fused_ordering(600) 00:14:50.022 fused_ordering(601) 00:14:50.022 fused_ordering(602) 00:14:50.022 fused_ordering(603) 00:14:50.022 fused_ordering(604) 00:14:50.022 fused_ordering(605) 00:14:50.022 fused_ordering(606) 00:14:50.022 fused_ordering(607) 00:14:50.022 fused_ordering(608) 00:14:50.022 fused_ordering(609) 00:14:50.022 fused_ordering(610) 00:14:50.022 fused_ordering(611) 00:14:50.022 fused_ordering(612) 00:14:50.022 fused_ordering(613) 00:14:50.022 fused_ordering(614) 00:14:50.022 fused_ordering(615) 00:14:50.586 fused_ordering(616) 00:14:50.586 fused_ordering(617) 00:14:50.586 fused_ordering(618) 00:14:50.586 fused_ordering(619) 00:14:50.586 fused_ordering(620) 00:14:50.586 fused_ordering(621) 00:14:50.586 fused_ordering(622) 00:14:50.586 fused_ordering(623) 00:14:50.586 fused_ordering(624) 00:14:50.586 fused_ordering(625) 00:14:50.586 fused_ordering(626) 00:14:50.586 fused_ordering(627) 00:14:50.586 fused_ordering(628) 00:14:50.586 fused_ordering(629) 00:14:50.586 fused_ordering(630) 00:14:50.586 fused_ordering(631) 00:14:50.586 fused_ordering(632) 00:14:50.586 fused_ordering(633) 00:14:50.586 fused_ordering(634) 
00:14:50.586 fused_ordering(635) 00:14:50.586 fused_ordering(636) 00:14:50.586 fused_ordering(637) 00:14:50.586 fused_ordering(638) 00:14:50.586 fused_ordering(639) 00:14:50.586 fused_ordering(640) 00:14:50.586 fused_ordering(641) 00:14:50.586 fused_ordering(642) 00:14:50.586 fused_ordering(643) 00:14:50.586 fused_ordering(644) 00:14:50.586 fused_ordering(645) 00:14:50.586 fused_ordering(646) 00:14:50.586 fused_ordering(647) 00:14:50.586 fused_ordering(648) 00:14:50.586 fused_ordering(649) 00:14:50.586 fused_ordering(650) 00:14:50.586 fused_ordering(651) 00:14:50.586 fused_ordering(652) 00:14:50.586 fused_ordering(653) 00:14:50.586 fused_ordering(654) 00:14:50.586 fused_ordering(655) 00:14:50.586 fused_ordering(656) 00:14:50.586 fused_ordering(657) 00:14:50.586 fused_ordering(658) 00:14:50.586 fused_ordering(659) 00:14:50.586 fused_ordering(660) 00:14:50.586 fused_ordering(661) 00:14:50.586 fused_ordering(662) 00:14:50.586 fused_ordering(663) 00:14:50.586 fused_ordering(664) 00:14:50.586 fused_ordering(665) 00:14:50.586 fused_ordering(666) 00:14:50.586 fused_ordering(667) 00:14:50.586 fused_ordering(668) 00:14:50.586 fused_ordering(669) 00:14:50.586 fused_ordering(670) 00:14:50.586 fused_ordering(671) 00:14:50.586 fused_ordering(672) 00:14:50.586 fused_ordering(673) 00:14:50.586 fused_ordering(674) 00:14:50.586 fused_ordering(675) 00:14:50.586 fused_ordering(676) 00:14:50.586 fused_ordering(677) 00:14:50.586 fused_ordering(678) 00:14:50.586 fused_ordering(679) 00:14:50.586 fused_ordering(680) 00:14:50.586 fused_ordering(681) 00:14:50.586 fused_ordering(682) 00:14:50.586 fused_ordering(683) 00:14:50.586 fused_ordering(684) 00:14:50.586 fused_ordering(685) 00:14:50.586 fused_ordering(686) 00:14:50.586 fused_ordering(687) 00:14:50.586 fused_ordering(688) 00:14:50.586 fused_ordering(689) 00:14:50.586 fused_ordering(690) 00:14:50.586 fused_ordering(691) 00:14:50.586 fused_ordering(692) 00:14:50.586 fused_ordering(693) 00:14:50.586 fused_ordering(694) 00:14:50.586 fused_ordering(695) 00:14:50.586 fused_ordering(696) 00:14:50.586 fused_ordering(697) 00:14:50.586 fused_ordering(698) 00:14:50.586 fused_ordering(699) 00:14:50.586 fused_ordering(700) 00:14:50.586 fused_ordering(701) 00:14:50.586 fused_ordering(702) 00:14:50.586 fused_ordering(703) 00:14:50.586 fused_ordering(704) 00:14:50.586 fused_ordering(705) 00:14:50.586 fused_ordering(706) 00:14:50.586 fused_ordering(707) 00:14:50.586 fused_ordering(708) 00:14:50.586 fused_ordering(709) 00:14:50.586 fused_ordering(710) 00:14:50.586 fused_ordering(711) 00:14:50.586 fused_ordering(712) 00:14:50.586 fused_ordering(713) 00:14:50.586 fused_ordering(714) 00:14:50.586 fused_ordering(715) 00:14:50.586 fused_ordering(716) 00:14:50.586 fused_ordering(717) 00:14:50.586 fused_ordering(718) 00:14:50.586 fused_ordering(719) 00:14:50.586 fused_ordering(720) 00:14:50.586 fused_ordering(721) 00:14:50.586 fused_ordering(722) 00:14:50.586 fused_ordering(723) 00:14:50.586 fused_ordering(724) 00:14:50.586 fused_ordering(725) 00:14:50.586 fused_ordering(726) 00:14:50.586 fused_ordering(727) 00:14:50.586 fused_ordering(728) 00:14:50.586 fused_ordering(729) 00:14:50.586 fused_ordering(730) 00:14:50.586 fused_ordering(731) 00:14:50.586 fused_ordering(732) 00:14:50.586 fused_ordering(733) 00:14:50.586 fused_ordering(734) 00:14:50.586 fused_ordering(735) 00:14:50.586 fused_ordering(736) 00:14:50.586 fused_ordering(737) 00:14:50.586 fused_ordering(738) 00:14:50.586 fused_ordering(739) 00:14:50.586 fused_ordering(740) 00:14:50.586 fused_ordering(741) 00:14:50.586 
fused_ordering(742) 00:14:50.586 fused_ordering(743) 00:14:50.586 fused_ordering(744) 00:14:50.586 fused_ordering(745) 00:14:50.586 fused_ordering(746) 00:14:50.586 fused_ordering(747) 00:14:50.586 fused_ordering(748) 00:14:50.586 fused_ordering(749) 00:14:50.586 fused_ordering(750) 00:14:50.586 fused_ordering(751) 00:14:50.586 fused_ordering(752) 00:14:50.586 fused_ordering(753) 00:14:50.586 fused_ordering(754) 00:14:50.586 fused_ordering(755) 00:14:50.586 fused_ordering(756) 00:14:50.586 fused_ordering(757) 00:14:50.586 fused_ordering(758) 00:14:50.586 fused_ordering(759) 00:14:50.586 fused_ordering(760) 00:14:50.586 fused_ordering(761) 00:14:50.586 fused_ordering(762) 00:14:50.586 fused_ordering(763) 00:14:50.586 fused_ordering(764) 00:14:50.586 fused_ordering(765) 00:14:50.586 fused_ordering(766) 00:14:50.586 fused_ordering(767) 00:14:50.586 fused_ordering(768) 00:14:50.586 fused_ordering(769) 00:14:50.586 fused_ordering(770) 00:14:50.586 fused_ordering(771) 00:14:50.586 fused_ordering(772) 00:14:50.586 fused_ordering(773) 00:14:50.586 fused_ordering(774) 00:14:50.586 fused_ordering(775) 00:14:50.586 fused_ordering(776) 00:14:50.586 fused_ordering(777) 00:14:50.586 fused_ordering(778) 00:14:50.586 fused_ordering(779) 00:14:50.586 fused_ordering(780) 00:14:50.586 fused_ordering(781) 00:14:50.586 fused_ordering(782) 00:14:50.586 fused_ordering(783) 00:14:50.586 fused_ordering(784) 00:14:50.586 fused_ordering(785) 00:14:50.586 fused_ordering(786) 00:14:50.586 fused_ordering(787) 00:14:50.586 fused_ordering(788) 00:14:50.586 fused_ordering(789) 00:14:50.586 fused_ordering(790) 00:14:50.586 fused_ordering(791) 00:14:50.586 fused_ordering(792) 00:14:50.586 fused_ordering(793) 00:14:50.586 fused_ordering(794) 00:14:50.586 fused_ordering(795) 00:14:50.586 fused_ordering(796) 00:14:50.586 fused_ordering(797) 00:14:50.586 fused_ordering(798) 00:14:50.586 fused_ordering(799) 00:14:50.586 fused_ordering(800) 00:14:50.586 fused_ordering(801) 00:14:50.586 fused_ordering(802) 00:14:50.586 fused_ordering(803) 00:14:50.586 fused_ordering(804) 00:14:50.586 fused_ordering(805) 00:14:50.586 fused_ordering(806) 00:14:50.587 fused_ordering(807) 00:14:50.587 fused_ordering(808) 00:14:50.587 fused_ordering(809) 00:14:50.587 fused_ordering(810) 00:14:50.587 fused_ordering(811) 00:14:50.587 fused_ordering(812) 00:14:50.587 fused_ordering(813) 00:14:50.587 fused_ordering(814) 00:14:50.587 fused_ordering(815) 00:14:50.587 fused_ordering(816) 00:14:50.587 fused_ordering(817) 00:14:50.587 fused_ordering(818) 00:14:50.587 fused_ordering(819) 00:14:50.587 fused_ordering(820) 00:14:51.520 fused_ordering(821) 00:14:51.520 fused_ordering(822) 00:14:51.520 fused_ordering(823) 00:14:51.520 fused_ordering(824) 00:14:51.520 fused_ordering(825) 00:14:51.520 fused_ordering(826) 00:14:51.520 fused_ordering(827) 00:14:51.520 fused_ordering(828) 00:14:51.520 fused_ordering(829) 00:14:51.520 fused_ordering(830) 00:14:51.520 fused_ordering(831) 00:14:51.520 fused_ordering(832) 00:14:51.520 fused_ordering(833) 00:14:51.520 fused_ordering(834) 00:14:51.520 fused_ordering(835) 00:14:51.520 fused_ordering(836) 00:14:51.520 fused_ordering(837) 00:14:51.520 fused_ordering(838) 00:14:51.520 fused_ordering(839) 00:14:51.520 fused_ordering(840) 00:14:51.520 fused_ordering(841) 00:14:51.520 fused_ordering(842) 00:14:51.520 fused_ordering(843) 00:14:51.520 fused_ordering(844) 00:14:51.520 fused_ordering(845) 00:14:51.520 fused_ordering(846) 00:14:51.520 fused_ordering(847) 00:14:51.520 fused_ordering(848) 00:14:51.520 fused_ordering(849) 
00:14:51.520 fused_ordering(850) 00:14:51.520 fused_ordering(851) 00:14:51.520 fused_ordering(852) 00:14:51.520 fused_ordering(853) 00:14:51.520 fused_ordering(854) 00:14:51.520 fused_ordering(855) 00:14:51.520 fused_ordering(856) 00:14:51.520 fused_ordering(857) 00:14:51.520 fused_ordering(858) 00:14:51.520 fused_ordering(859) 00:14:51.520 fused_ordering(860) 00:14:51.520 fused_ordering(861) 00:14:51.520 fused_ordering(862) 00:14:51.520 fused_ordering(863) 00:14:51.520 fused_ordering(864) 00:14:51.520 fused_ordering(865) 00:14:51.520 fused_ordering(866) 00:14:51.520 fused_ordering(867) 00:14:51.520 fused_ordering(868) 00:14:51.520 fused_ordering(869) 00:14:51.520 fused_ordering(870) 00:14:51.520 fused_ordering(871) 00:14:51.520 fused_ordering(872) 00:14:51.520 fused_ordering(873) 00:14:51.520 fused_ordering(874) 00:14:51.520 fused_ordering(875) 00:14:51.520 fused_ordering(876) 00:14:51.520 fused_ordering(877) 00:14:51.520 fused_ordering(878) 00:14:51.520 fused_ordering(879) 00:14:51.520 fused_ordering(880) 00:14:51.520 fused_ordering(881) 00:14:51.520 fused_ordering(882) 00:14:51.520 fused_ordering(883) 00:14:51.520 fused_ordering(884) 00:14:51.520 fused_ordering(885) 00:14:51.520 fused_ordering(886) 00:14:51.520 fused_ordering(887) 00:14:51.520 fused_ordering(888) 00:14:51.521 fused_ordering(889) 00:14:51.521 fused_ordering(890) 00:14:51.521 fused_ordering(891) 00:14:51.521 fused_ordering(892) 00:14:51.521 fused_ordering(893) 00:14:51.521 fused_ordering(894) 00:14:51.521 fused_ordering(895) 00:14:51.521 fused_ordering(896) 00:14:51.521 fused_ordering(897) 00:14:51.521 fused_ordering(898) 00:14:51.521 fused_ordering(899) 00:14:51.521 fused_ordering(900) 00:14:51.521 fused_ordering(901) 00:14:51.521 fused_ordering(902) 00:14:51.521 fused_ordering(903) 00:14:51.521 fused_ordering(904) 00:14:51.521 fused_ordering(905) 00:14:51.521 fused_ordering(906) 00:14:51.521 fused_ordering(907) 00:14:51.521 fused_ordering(908) 00:14:51.521 fused_ordering(909) 00:14:51.521 fused_ordering(910) 00:14:51.521 fused_ordering(911) 00:14:51.521 fused_ordering(912) 00:14:51.521 fused_ordering(913) 00:14:51.521 fused_ordering(914) 00:14:51.521 fused_ordering(915) 00:14:51.521 fused_ordering(916) 00:14:51.521 fused_ordering(917) 00:14:51.521 fused_ordering(918) 00:14:51.521 fused_ordering(919) 00:14:51.521 fused_ordering(920) 00:14:51.521 fused_ordering(921) 00:14:51.521 fused_ordering(922) 00:14:51.521 fused_ordering(923) 00:14:51.521 fused_ordering(924) 00:14:51.521 fused_ordering(925) 00:14:51.521 fused_ordering(926) 00:14:51.521 fused_ordering(927) 00:14:51.521 fused_ordering(928) 00:14:51.521 fused_ordering(929) 00:14:51.521 fused_ordering(930) 00:14:51.521 fused_ordering(931) 00:14:51.521 fused_ordering(932) 00:14:51.521 fused_ordering(933) 00:14:51.521 fused_ordering(934) 00:14:51.521 fused_ordering(935) 00:14:51.521 fused_ordering(936) 00:14:51.521 fused_ordering(937) 00:14:51.521 fused_ordering(938) 00:14:51.521 fused_ordering(939) 00:14:51.521 fused_ordering(940) 00:14:51.521 fused_ordering(941) 00:14:51.521 fused_ordering(942) 00:14:51.521 fused_ordering(943) 00:14:51.521 fused_ordering(944) 00:14:51.521 fused_ordering(945) 00:14:51.521 fused_ordering(946) 00:14:51.521 fused_ordering(947) 00:14:51.521 fused_ordering(948) 00:14:51.521 fused_ordering(949) 00:14:51.521 fused_ordering(950) 00:14:51.521 fused_ordering(951) 00:14:51.521 fused_ordering(952) 00:14:51.521 fused_ordering(953) 00:14:51.521 fused_ordering(954) 00:14:51.521 fused_ordering(955) 00:14:51.521 fused_ordering(956) 00:14:51.521 
fused_ordering(957) 00:14:51.521 fused_ordering(958) 00:14:51.521 fused_ordering(959) 00:14:51.521 fused_ordering(960) 00:14:51.521 fused_ordering(961) 00:14:51.521 fused_ordering(962) 00:14:51.521 fused_ordering(963) 00:14:51.521 fused_ordering(964) 00:14:51.521 fused_ordering(965) 00:14:51.521 fused_ordering(966) 00:14:51.521 fused_ordering(967) 00:14:51.521 fused_ordering(968) 00:14:51.521 fused_ordering(969) 00:14:51.521 fused_ordering(970) 00:14:51.521 fused_ordering(971) 00:14:51.521 fused_ordering(972) 00:14:51.521 fused_ordering(973) 00:14:51.521 fused_ordering(974) 00:14:51.521 fused_ordering(975) 00:14:51.521 fused_ordering(976) 00:14:51.521 fused_ordering(977) 00:14:51.521 fused_ordering(978) 00:14:51.521 fused_ordering(979) 00:14:51.521 fused_ordering(980) 00:14:51.521 fused_ordering(981) 00:14:51.521 fused_ordering(982) 00:14:51.521 fused_ordering(983) 00:14:51.521 fused_ordering(984) 00:14:51.521 fused_ordering(985) 00:14:51.521 fused_ordering(986) 00:14:51.521 fused_ordering(987) 00:14:51.521 fused_ordering(988) 00:14:51.521 fused_ordering(989) 00:14:51.521 fused_ordering(990) 00:14:51.521 fused_ordering(991) 00:14:51.521 fused_ordering(992) 00:14:51.521 fused_ordering(993) 00:14:51.521 fused_ordering(994) 00:14:51.521 fused_ordering(995) 00:14:51.521 fused_ordering(996) 00:14:51.521 fused_ordering(997) 00:14:51.521 fused_ordering(998) 00:14:51.521 fused_ordering(999) 00:14:51.521 fused_ordering(1000) 00:14:51.521 fused_ordering(1001) 00:14:51.521 fused_ordering(1002) 00:14:51.521 fused_ordering(1003) 00:14:51.521 fused_ordering(1004) 00:14:51.521 fused_ordering(1005) 00:14:51.521 fused_ordering(1006) 00:14:51.521 fused_ordering(1007) 00:14:51.521 fused_ordering(1008) 00:14:51.521 fused_ordering(1009) 00:14:51.521 fused_ordering(1010) 00:14:51.521 fused_ordering(1011) 00:14:51.521 fused_ordering(1012) 00:14:51.521 fused_ordering(1013) 00:14:51.521 fused_ordering(1014) 00:14:51.521 fused_ordering(1015) 00:14:51.521 fused_ordering(1016) 00:14:51.521 fused_ordering(1017) 00:14:51.521 fused_ordering(1018) 00:14:51.521 fused_ordering(1019) 00:14:51.521 fused_ordering(1020) 00:14:51.521 fused_ordering(1021) 00:14:51.521 fused_ordering(1022) 00:14:51.521 fused_ordering(1023) 00:14:51.521 10:45:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:51.521 10:45:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:51.521 10:45:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.521 10:45:08 -- nvmf/common.sh@116 -- # sync 00:14:51.521 10:45:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.521 10:45:08 -- nvmf/common.sh@119 -- # set +e 00:14:51.521 10:45:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.521 10:45:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.521 rmmod nvme_tcp 00:14:51.521 rmmod nvme_fabrics 00:14:51.521 rmmod nvme_keyring 00:14:51.521 10:45:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.521 10:45:08 -- nvmf/common.sh@123 -- # set -e 00:14:51.521 10:45:08 -- nvmf/common.sh@124 -- # return 0 00:14:51.521 10:45:08 -- nvmf/common.sh@477 -- # '[' -n 3422232 ']' 00:14:51.521 10:45:08 -- nvmf/common.sh@478 -- # killprocess 3422232 00:14:51.521 10:45:08 -- common/autotest_common.sh@926 -- # '[' -z 3422232 ']' 00:14:51.521 10:45:08 -- common/autotest_common.sh@930 -- # kill -0 3422232 00:14:51.521 10:45:08 -- common/autotest_common.sh@931 -- # uname 00:14:51.521 10:45:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:51.521 10:45:08 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 3422232 00:14:51.521 10:45:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:51.521 10:45:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:51.521 10:45:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3422232' 00:14:51.521 killing process with pid 3422232 00:14:51.521 10:45:08 -- common/autotest_common.sh@945 -- # kill 3422232 00:14:51.521 10:45:08 -- common/autotest_common.sh@950 -- # wait 3422232 00:14:51.521 10:45:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.521 10:45:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.521 10:45:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.521 10:45:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.521 10:45:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.521 10:45:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.521 10:45:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.521 10:45:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.057 10:45:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:54.057 00:14:54.057 real 0m8.588s 00:14:54.057 user 0m6.420s 00:14:54.057 sys 0m3.637s 00:14:54.057 10:45:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.057 10:45:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.057 ************************************ 00:14:54.057 END TEST nvmf_fused_ordering 00:14:54.057 ************************************ 00:14:54.057 10:45:10 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:54.057 10:45:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:54.057 10:45:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.057 10:45:10 -- common/autotest_common.sh@10 -- # set +x 00:14:54.057 ************************************ 00:14:54.057 START TEST nvmf_delete_subsystem 00:14:54.057 ************************************ 00:14:54.057 10:45:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:54.057 * Looking for test storage... 
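The teardown logged just above is nvmftestfini from test/nvmf/common.sh. A minimal sketch of the sequence it records here (an assumed simplification; the real helper wraps each step in retries and xtrace plumbing):

  # sketch of the fused_ordering teardown shown above -- not the literal common.sh code
  nvmfpid=3422232                      # nvmf_tgt pid for this run
  sync
  modprobe -v -r nvme-tcp              # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: signal the target, then reap it
  _remove_spdk_ns                      # helper sourced from common.sh; tears down the cvl_0_0_ns_spdk namespace (assumed)
  ip -4 addr flush cvl_0_1             # drop the initiator-side address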
00:14:54.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.057 10:45:10 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.057 10:45:10 -- nvmf/common.sh@7 -- # uname -s 00:14:54.057 10:45:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.057 10:45:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.057 10:45:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.057 10:45:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.057 10:45:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.057 10:45:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.057 10:45:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.057 10:45:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.057 10:45:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.057 10:45:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.057 10:45:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.057 10:45:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.057 10:45:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.057 10:45:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.057 10:45:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.057 10:45:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.057 10:45:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.057 10:45:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.057 10:45:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.057 10:45:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.057 10:45:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.057 10:45:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.057 10:45:10 -- paths/export.sh@5 -- # export PATH 00:14:54.057 10:45:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.057 10:45:10 -- nvmf/common.sh@46 -- # : 0 00:14:54.057 10:45:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:54.057 10:45:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:54.057 10:45:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:54.057 10:45:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.057 10:45:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.057 10:45:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:54.057 10:45:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:54.057 10:45:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:54.057 10:45:10 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:54.057 10:45:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:54.057 10:45:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.057 10:45:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:54.057 10:45:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:54.057 10:45:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:54.057 10:45:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.057 10:45:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.057 10:45:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.057 10:45:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:54.057 10:45:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:54.057 10:45:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:54.057 10:45:10 -- common/autotest_common.sh@10 -- # set +x 00:14:56.002 10:45:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:56.002 10:45:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:56.002 10:45:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:56.002 10:45:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:56.002 10:45:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:56.002 10:45:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:56.002 10:45:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:56.002 10:45:12 -- nvmf/common.sh@294 -- # net_devs=() 00:14:56.002 10:45:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:56.002 10:45:12 -- nvmf/common.sh@295 -- # e810=() 00:14:56.002 10:45:12 -- nvmf/common.sh@295 -- # local -ga e810 00:14:56.002 10:45:12 -- nvmf/common.sh@296 -- # x722=() 
00:14:56.002 10:45:12 -- nvmf/common.sh@296 -- # local -ga x722 00:14:56.002 10:45:12 -- nvmf/common.sh@297 -- # mlx=() 00:14:56.002 10:45:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:56.002 10:45:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.002 10:45:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:56.002 10:45:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:56.002 10:45:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:56.002 10:45:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:56.002 10:45:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:56.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:56.002 10:45:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:56.002 10:45:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:56.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:56.002 10:45:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:56.002 10:45:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:56.002 10:45:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.002 10:45:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:56.002 10:45:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.002 10:45:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:56.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:56.002 10:45:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
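The "Found 0000:0a:00.x (0x8086 - 0x159b)" and "Found net devices under ...: cvl_0_x" lines come from gather_supported_nvmf_pci_devs resolving each whitelisted E810 PCI function to its kernel netdev through sysfs. A standalone equivalent of that lookup, under the assumption that only the two logged functions matter:

  # map the two E810 ports seen above to their net device names via sysfs
  for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
    done
  done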
00:14:56.002 10:45:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:56.002 10:45:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.002 10:45:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:56.002 10:45:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.002 10:45:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:56.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:56.002 10:45:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.002 10:45:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:56.002 10:45:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:56.002 10:45:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:56.002 10:45:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:56.002 10:45:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.002 10:45:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.002 10:45:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.002 10:45:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:56.002 10:45:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.002 10:45:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.002 10:45:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:56.002 10:45:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.002 10:45:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.002 10:45:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:56.002 10:45:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:56.002 10:45:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.002 10:45:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.002 10:45:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.002 10:45:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.002 10:45:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:56.002 10:45:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.002 10:45:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.002 10:45:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.002 10:45:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:56.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:14:56.003 00:14:56.003 --- 10.0.0.2 ping statistics --- 00:14:56.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.003 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:14:56.003 10:45:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:14:56.003 00:14:56.003 --- 10.0.0.1 ping statistics --- 00:14:56.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.003 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:14:56.003 10:45:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.003 10:45:12 -- nvmf/common.sh@410 -- # return 0 00:14:56.003 10:45:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.003 10:45:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.003 10:45:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.003 10:45:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.003 10:45:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.003 10:45:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.003 10:45:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.003 10:45:12 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:56.003 10:45:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:56.003 10:45:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:56.003 10:45:12 -- common/autotest_common.sh@10 -- # set +x 00:14:56.003 10:45:12 -- nvmf/common.sh@469 -- # nvmfpid=3425239 00:14:56.003 10:45:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:56.003 10:45:12 -- nvmf/common.sh@470 -- # waitforlisten 3425239 00:14:56.003 10:45:12 -- common/autotest_common.sh@819 -- # '[' -z 3425239 ']' 00:14:56.003 10:45:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.003 10:45:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.003 10:45:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.003 10:45:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.003 10:45:12 -- common/autotest_common.sh@10 -- # set +x 00:14:56.003 [2024-07-10 10:45:12.550645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:56.003 [2024-07-10 10:45:12.550741] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.003 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.003 [2024-07-10 10:45:12.616391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.003 [2024-07-10 10:45:12.704047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.003 [2024-07-10 10:45:12.704188] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.003 [2024-07-10 10:45:12.704204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.003 [2024-07-10 10:45:12.704217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
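What nvmf_tcp_init just did, condensed from the ip/iptables/ping lines above: the two E810 ports are split so that cvl_0_0 becomes the target port (10.0.0.2) inside a dedicated network namespace while cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1), TCP/4420 is opened, and reachability is checked both ways before the target starts. As one block, with names and paths taken from the log (a sketch, not the literal nvmf/common.sh code):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP from the initiator side
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator
  # nvmfappstart then launches the target inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &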
00:14:56.003 [2024-07-10 10:45:12.704294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.003 [2024-07-10 10:45:12.704299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.936 10:45:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.936 10:45:13 -- common/autotest_common.sh@852 -- # return 0 00:14:56.936 10:45:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:56.936 10:45:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 10:45:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.936 10:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 [2024-07-10 10:45:13.532996] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.936 10:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:56.936 10:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 10:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.936 10:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 [2024-07-10 10:45:13.549148] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.936 10:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:56.936 10:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 NULL1 00:14:56.936 10:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:56.936 10:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 Delay0 00:14:56.936 10:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.936 10:45:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:56.936 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.936 10:45:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@28 -- # perf_pid=3425397 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:56.936 10:45:13 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:56.936 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.936 [2024-07-10 10:45:13.623880] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:58.832 10:45:15 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.832 10:45:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.832 10:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 
00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 [2024-07-10 10:45:15.715565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f786c00c1d0 is same with the state(5) to be set 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read 
completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.091 Write completed with error (sct=0, sc=8) 00:14:59.091 starting I/O failed: -6 00:14:59.091 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 
00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 [2024-07-10 10:45:15.716544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f786c000c00 is same with the state(5) to be set 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 Write completed with error (sct=0, sc=8) 00:14:59.092 Read completed with error (sct=0, sc=8) 00:14:59.092 starting I/O failed: -6 00:14:59.092 starting I/O failed: -6 00:14:59.092 starting I/O failed: -6 00:14:59.092 starting I/O failed: -6 00:14:59.092 starting I/O failed: -6 00:15:00.027 [2024-07-10 10:45:16.682002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a65e0 is same with the state(5) to be set 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 [2024-07-10 10:45:16.714681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f786c00c480 is same with the state(5) to be set 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed 
with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Write completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.027 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 [2024-07-10 10:45:16.718170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a3060 is same with the state(5) to be set 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, 
sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 [2024-07-10 10:45:16.718452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a3650 is same with the state(5) to be set 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Write completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 Read completed with error (sct=0, sc=8) 00:15:00.028 [2024-07-10 10:45:16.718615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f786c00bf20 is same with the state(5) to be set 00:15:00.028 [2024-07-10 10:45:16.719648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a65e0 (9): Bad file descriptor 00:15:00.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:00.028 10:45:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.028 10:45:16 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:00.028 10:45:16 -- target/delete_subsystem.sh@35 -- # kill -0 3425397 00:15:00.028 10:45:16 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:00.028 Initializing NVMe Controllers 00:15:00.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.028 Controller IO queue size 128, less than required. 00:15:00.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:00.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:00.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:00.028 Initialization complete. Launching workers. 
00:15:00.028 ======================================================== 00:15:00.028 Latency(us) 00:15:00.028 Device Information : IOPS MiB/s Average min max 00:15:00.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.54 0.09 912805.61 668.12 1014200.87 00:15:00.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.23 0.08 924370.62 1025.55 1011520.76 00:15:00.028 ======================================================== 00:15:00.028 Total : 338.77 0.17 918173.26 668.12 1014200.87 00:15:00.028 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@35 -- # kill -0 3425397 00:15:00.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3425397) - No such process 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@45 -- # NOT wait 3425397 00:15:00.594 10:45:17 -- common/autotest_common.sh@640 -- # local es=0 00:15:00.594 10:45:17 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3425397 00:15:00.594 10:45:17 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:00.594 10:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:00.594 10:45:17 -- common/autotest_common.sh@632 -- # type -t wait 00:15:00.594 10:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:00.594 10:45:17 -- common/autotest_common.sh@643 -- # wait 3425397 00:15:00.594 10:45:17 -- common/autotest_common.sh@643 -- # es=1 00:15:00.594 10:45:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:00.594 10:45:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:00.594 10:45:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:00.594 10:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.594 10:45:17 -- common/autotest_common.sh@10 -- # set +x 00:15:00.594 10:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.594 10:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.594 10:45:17 -- common/autotest_common.sh@10 -- # set +x 00:15:00.594 [2024-07-10 10:45:17.243544] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.594 10:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.594 10:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.594 10:45:17 -- common/autotest_common.sh@10 -- # set +x 00:15:00.594 10:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@54 -- # perf_pid=3425813 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:00.594 10:45:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:00.594 EAL: No free 2048 kB hugepages 
reported on node 1 00:15:00.594 [2024-07-10 10:45:17.306053] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:01.160 10:45:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.160 10:45:17 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:01.160 10:45:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.725 10:45:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.725 10:45:18 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:01.725 10:45:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.982 10:45:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:01.982 10:45:18 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:01.982 10:45:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.548 10:45:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.548 10:45:19 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:02.548 10:45:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.113 10:45:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.113 10:45:19 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:03.113 10:45:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.676 10:45:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.676 10:45:20 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:03.676 10:45:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.676 Initializing NVMe Controllers 00:15:03.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.676 Controller IO queue size 128, less than required. 00:15:03.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:03.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:03.676 Initialization complete. Launching workers. 
00:15:03.676 ======================================================== 00:15:03.676 Latency(us) 00:15:03.676 Device Information : IOPS MiB/s Average min max 00:15:03.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003927.13 1000223.59 1041494.33 00:15:03.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005101.55 1000235.96 1012856.61 00:15:03.676 ======================================================== 00:15:03.676 Total : 256.00 0.12 1004514.34 1000223.59 1041494.33 00:15:03.676 00:15:04.242 10:45:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.242 10:45:20 -- target/delete_subsystem.sh@57 -- # kill -0 3425813 00:15:04.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3425813) - No such process 00:15:04.242 10:45:20 -- target/delete_subsystem.sh@67 -- # wait 3425813 00:15:04.242 10:45:20 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:04.242 10:45:20 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:04.242 10:45:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:04.242 10:45:20 -- nvmf/common.sh@116 -- # sync 00:15:04.242 10:45:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:04.242 10:45:20 -- nvmf/common.sh@119 -- # set +e 00:15:04.242 10:45:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:04.242 10:45:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:04.242 rmmod nvme_tcp 00:15:04.242 rmmod nvme_fabrics 00:15:04.242 rmmod nvme_keyring 00:15:04.242 10:45:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:04.242 10:45:20 -- nvmf/common.sh@123 -- # set -e 00:15:04.242 10:45:20 -- nvmf/common.sh@124 -- # return 0 00:15:04.242 10:45:20 -- nvmf/common.sh@477 -- # '[' -n 3425239 ']' 00:15:04.242 10:45:20 -- nvmf/common.sh@478 -- # killprocess 3425239 00:15:04.242 10:45:20 -- common/autotest_common.sh@926 -- # '[' -z 3425239 ']' 00:15:04.242 10:45:20 -- common/autotest_common.sh@930 -- # kill -0 3425239 00:15:04.242 10:45:20 -- common/autotest_common.sh@931 -- # uname 00:15:04.242 10:45:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:04.242 10:45:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3425239 00:15:04.242 10:45:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:04.242 10:45:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:04.242 10:45:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3425239' 00:15:04.242 killing process with pid 3425239 00:15:04.242 10:45:20 -- common/autotest_common.sh@945 -- # kill 3425239 00:15:04.242 10:45:20 -- common/autotest_common.sh@950 -- # wait 3425239 00:15:04.500 10:45:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:04.500 10:45:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:04.501 10:45:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:04.501 10:45:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.501 10:45:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:04.501 10:45:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.501 10:45:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.501 10:45:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.405 10:45:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:06.405 00:15:06.405 real 0m12.719s 00:15:06.405 user 0m29.112s 00:15:06.405 sys 0m2.759s 
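For orientation, the nvmf_delete_subsystem run that ends here boils down to the following flow, condensed from the rpc_cmd and spdk_nvme_perf invocations logged above (a sketch of the script's intent, not delete_subsystem.sh verbatim):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1s delay keeps I/O queued
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem while queue depth 128 is outstanding
  # the queued I/O is aborted (the sct=0, sc=8 completions above), perf exits, and the script
  # polls `kill -0 $perf_pid` every 0.5s until the pid is gone, then re-creates the subsystem and repeats.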
00:15:06.405 10:45:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.405 10:45:23 -- common/autotest_common.sh@10 -- # set +x 00:15:06.405 ************************************ 00:15:06.405 END TEST nvmf_delete_subsystem 00:15:06.405 ************************************ 00:15:06.405 10:45:23 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:06.405 10:45:23 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:06.405 10:45:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:06.405 10:45:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:06.405 10:45:23 -- common/autotest_common.sh@10 -- # set +x 00:15:06.405 ************************************ 00:15:06.405 START TEST nvmf_nvme_cli 00:15:06.405 ************************************ 00:15:06.405 10:45:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:06.405 * Looking for test storage... 00:15:06.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.405 10:45:23 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.405 10:45:23 -- nvmf/common.sh@7 -- # uname -s 00:15:06.405 10:45:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.405 10:45:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.405 10:45:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.405 10:45:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.405 10:45:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.405 10:45:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.405 10:45:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.405 10:45:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.405 10:45:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.405 10:45:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.405 10:45:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.405 10:45:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.405 10:45:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.405 10:45:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.405 10:45:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.405 10:45:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.405 10:45:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.405 10:45:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.405 10:45:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.405 10:45:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.405 10:45:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.405 10:45:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.405 10:45:23 -- paths/export.sh@5 -- # export PATH 00:15:06.405 10:45:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.405 10:45:23 -- nvmf/common.sh@46 -- # : 0 00:15:06.405 10:45:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:06.405 10:45:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:06.405 10:45:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:06.405 10:45:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.405 10:45:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.405 10:45:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:06.405 10:45:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:06.405 10:45:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:06.405 10:45:23 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.405 10:45:23 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.405 10:45:23 -- target/nvme_cli.sh@14 -- # devs=() 00:15:06.405 10:45:23 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:06.405 10:45:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:06.405 10:45:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.405 10:45:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:06.405 10:45:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:06.405 10:45:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:06.405 10:45:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.405 10:45:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.405 10:45:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.405 10:45:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:06.405 10:45:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:06.405 10:45:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:06.405 10:45:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.934 10:45:25 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.934 10:45:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.934 10:45:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.934 10:45:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.934 10:45:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.934 10:45:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.934 10:45:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.934 10:45:25 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.934 10:45:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.934 10:45:25 -- nvmf/common.sh@295 -- # e810=() 00:15:08.934 10:45:25 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.934 10:45:25 -- nvmf/common.sh@296 -- # x722=() 00:15:08.934 10:45:25 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.934 10:45:25 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.934 10:45:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.934 10:45:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.934 10:45:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.934 10:45:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:08.934 10:45:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.934 10:45:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.934 10:45:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:08.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:08.934 10:45:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.934 10:45:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:08.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:08.934 10:45:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
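The scan above comes from nvmf/common.sh: it keys supported NICs off PCI vendor/device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox ConnectX IDs), then resolves each matching PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, using lspci and the same /sys/bus/pci/devices/$pci/net/ path instead of the script's internal pci_bus_cache helper, assuming an Intel E810 (8086:159b) part as found in this run:

    # Hypothetical equivalent of the scan logged above, not the test's own helper.
    intel=8086; e810=159b
    for pci in $(lspci -Dn -d "${intel}:${e810}" | awk '{print $1}'); do
        echo "Found $pci (0x${intel} - 0x${e810})"
        # Map the PCI function to its net device name via sysfs
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${net##*/}"   # e.g. cvl_0_0 / cvl_0_1
        done
    done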
00:15:08.934 10:45:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.934 10:45:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:08.934 10:45:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.934 10:45:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.934 10:45:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.934 10:45:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.935 10:45:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:08.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:08.935 10:45:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.935 10:45:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.935 10:45:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.935 10:45:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.935 10:45:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.935 10:45:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:08.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:08.935 10:45:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.935 10:45:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.935 10:45:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.935 10:45:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.935 10:45:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:08.935 10:45:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:08.935 10:45:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.935 10:45:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.935 10:45:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.935 10:45:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:08.935 10:45:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.935 10:45:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.935 10:45:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:08.935 10:45:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.935 10:45:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.935 10:45:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:08.935 10:45:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:08.935 10:45:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.935 10:45:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.935 10:45:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.935 10:45:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.935 10:45:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:08.935 10:45:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.935 10:45:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.935 10:45:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.935 10:45:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:08.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:08.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:15:08.935 00:15:08.935 --- 10.0.0.2 ping statistics --- 00:15:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.935 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:15:08.935 10:45:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:15:08.935 00:15:08.935 --- 10.0.0.1 ping statistics --- 00:15:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.935 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:08.935 10:45:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.935 10:45:25 -- nvmf/common.sh@410 -- # return 0 00:15:08.935 10:45:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.935 10:45:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.935 10:45:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.935 10:45:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.935 10:45:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.935 10:45:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.935 10:45:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.935 10:45:25 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:08.935 10:45:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.935 10:45:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:08.935 10:45:25 -- common/autotest_common.sh@10 -- # set +x 00:15:08.935 10:45:25 -- nvmf/common.sh@469 -- # nvmfpid=3428176 00:15:08.935 10:45:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.935 10:45:25 -- nvmf/common.sh@470 -- # waitforlisten 3428176 00:15:08.935 10:45:25 -- common/autotest_common.sh@819 -- # '[' -z 3428176 ']' 00:15:08.935 10:45:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.935 10:45:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:08.935 10:45:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.935 10:45:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:08.935 10:45:25 -- common/autotest_common.sh@10 -- # set +x 00:15:08.935 [2024-07-10 10:45:25.394991] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:08.935 [2024-07-10 10:45:25.395071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.935 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.935 [2024-07-10 10:45:25.467342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.935 [2024-07-10 10:45:25.556716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.935 [2024-07-10 10:45:25.556891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.935 [2024-07-10 10:45:25.556911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
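With both ports identified, nvmf_tcp_init isolates the target port in its own network namespace so initiator and target traffic actually crosses the link even though both ports sit in the same host. A condensed recap of the sequence just logged, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing used in this run:

    # Target port moves into a namespace; the initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the host-side port
    ping -c 1 10.0.0.2                                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host
    modprobe nvme-tcp
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF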
00:15:08.935 [2024-07-10 10:45:25.556925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.935 [2024-07-10 10:45:25.557012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.935 [2024-07-10 10:45:25.557081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.935 [2024-07-10 10:45:25.557180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.935 [2024-07-10 10:45:25.557182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.868 10:45:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:09.868 10:45:26 -- common/autotest_common.sh@852 -- # return 0 00:15:09.868 10:45:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:09.868 10:45:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 10:45:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.868 10:45:26 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 [2024-07-10 10:45:26.394096] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 Malloc0 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 Malloc1 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 [2024-07-10 10:45:26.480301] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.868 10:45:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.868 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 10:45:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.868 10:45:26 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:09.868 00:15:09.868 Discovery Log Number of Records 2, Generation counter 2 00:15:09.868 =====Discovery Log Entry 0====== 00:15:09.868 trtype: tcp 00:15:09.868 adrfam: ipv4 00:15:09.868 subtype: current discovery subsystem 00:15:09.868 treq: not required 00:15:09.868 portid: 0 00:15:09.868 trsvcid: 4420 00:15:09.868 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:09.868 traddr: 10.0.0.2 00:15:09.868 eflags: explicit discovery connections, duplicate discovery information 00:15:09.868 sectype: none 00:15:09.868 =====Discovery Log Entry 1====== 00:15:09.868 trtype: tcp 00:15:09.868 adrfam: ipv4 00:15:09.868 subtype: nvme subsystem 00:15:09.868 treq: not required 00:15:09.868 portid: 0 00:15:09.868 trsvcid: 4420 00:15:09.868 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:09.868 traddr: 10.0.0.2 00:15:09.868 eflags: none 00:15:09.868 sectype: none 00:15:09.868 10:45:26 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:09.868 10:45:26 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:09.868 10:45:26 -- nvmf/common.sh@510 -- # local dev _ 00:15:09.868 10:45:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:09.868 10:45:26 -- nvmf/common.sh@509 -- # nvme list 00:15:09.868 10:45:26 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:09.868 10:45:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:09.868 10:45:26 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:09.868 10:45:26 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:09.868 10:45:26 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:09.868 10:45:26 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:10.801 10:45:27 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:10.801 10:45:27 -- common/autotest_common.sh@1177 -- # local i=0 00:15:10.801 10:45:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.801 10:45:27 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:10.801 10:45:27 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:10.801 10:45:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:12.697 10:45:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:12.697 10:45:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:12.697 10:45:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.697 10:45:29 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:12.697 10:45:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.697 10:45:29 -- common/autotest_common.sh@1187 -- # return 0 00:15:12.697 10:45:29 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:12.697 10:45:29 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@509 -- # nvme list 00:15:12.697 10:45:29 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:12.697 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.697 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:12.697 10:45:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:12.697 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:12.697 10:45:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:12.697 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.697 10:45:29 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:12.697 /dev/nvme0n1 ]] 00:15:12.697 10:45:29 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:12.697 10:45:29 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:12.697 10:45:29 -- nvmf/common.sh@510 -- # local dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.697 10:45:29 -- nvmf/common.sh@509 -- # nvme list 00:15:12.987 10:45:29 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:12.987 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.987 10:45:29 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.987 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.987 10:45:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:12.987 10:45:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:12.987 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.987 10:45:29 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:12.987 10:45:29 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:12.987 10:45:29 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:12.987 10:45:29 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:12.987 10:45:29 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.245 10:45:29 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.245 10:45:29 -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.245 10:45:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:13.245 10:45:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.245 10:45:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:13.245 10:45:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.245 10:45:29 -- common/autotest_common.sh@1210 -- # return 0 00:15:13.245 10:45:29 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:13.245 10:45:29 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:13.245 10:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.245 10:45:29 -- common/autotest_common.sh@10 -- # set +x 00:15:13.245 10:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.245 10:45:29 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:13.245 10:45:29 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:13.245 10:45:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:13.245 10:45:29 -- nvmf/common.sh@116 -- # sync 00:15:13.245 10:45:29 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:13.245 10:45:29 -- nvmf/common.sh@119 -- # set +e 00:15:13.245 10:45:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:13.245 10:45:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:13.245 rmmod nvme_tcp 00:15:13.245 rmmod nvme_fabrics 00:15:13.245 rmmod nvme_keyring 00:15:13.245 10:45:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:13.245 10:45:29 -- nvmf/common.sh@123 -- # set -e 00:15:13.245 10:45:30 -- nvmf/common.sh@124 -- # return 0 00:15:13.245 10:45:30 -- nvmf/common.sh@477 -- # '[' -n 3428176 ']' 00:15:13.245 10:45:30 -- nvmf/common.sh@478 -- # killprocess 3428176 00:15:13.245 10:45:30 -- common/autotest_common.sh@926 -- # '[' -z 3428176 ']' 00:15:13.245 10:45:30 -- common/autotest_common.sh@930 -- # kill -0 3428176 00:15:13.245 10:45:30 -- common/autotest_common.sh@931 -- # uname 00:15:13.245 10:45:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:13.245 10:45:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3428176 00:15:13.245 10:45:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:13.245 10:45:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:13.245 10:45:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3428176' 00:15:13.245 killing process with pid 3428176 00:15:13.245 10:45:30 -- common/autotest_common.sh@945 -- # kill 3428176 00:15:13.245 10:45:30 -- common/autotest_common.sh@950 -- # wait 3428176 00:15:13.503 10:45:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:13.503 10:45:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:13.503 10:45:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:13.503 10:45:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.503 10:45:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:13.503 10:45:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.503 10:45:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.503 10:45:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.034 10:45:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:16.034 00:15:16.034 real 0m9.232s 00:15:16.034 user 0m19.622s 00:15:16.034 sys 0m2.226s 00:15:16.034 10:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.034 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:15:16.034 ************************************ 00:15:16.034 END TEST nvmf_nvme_cli 00:15:16.034 ************************************ 00:15:16.034 10:45:32 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:16.035 10:45:32 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:16.035 10:45:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:16.035 10:45:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:16.035 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:15:16.035 ************************************ 00:15:16.035 START TEST nvmf_vfio_user 00:15:16.035 ************************************ 00:15:16.035 10:45:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:16.035 * Looking for test storage... 
00:15:16.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.035 10:45:32 -- nvmf/common.sh@7 -- # uname -s 00:15:16.035 10:45:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.035 10:45:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.035 10:45:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.035 10:45:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.035 10:45:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.035 10:45:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.035 10:45:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.035 10:45:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.035 10:45:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.035 10:45:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.035 10:45:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.035 10:45:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.035 10:45:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.035 10:45:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.035 10:45:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.035 10:45:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.035 10:45:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.035 10:45:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.035 10:45:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.035 10:45:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.035 10:45:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.035 10:45:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.035 10:45:32 -- paths/export.sh@5 -- # export PATH 00:15:16.035 10:45:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.035 10:45:32 -- nvmf/common.sh@46 -- # : 0 00:15:16.035 10:45:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.035 10:45:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.035 10:45:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.035 10:45:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.035 10:45:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.035 10:45:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:16.035 10:45:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.035 10:45:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3429136 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3429136' 00:15:16.035 Process pid: 3429136 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.035 10:45:32 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3429136 00:15:16.035 10:45:32 -- common/autotest_common.sh@819 -- # '[' -z 3429136 ']' 00:15:16.035 10:45:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.035 10:45:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:16.035 10:45:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.035 10:45:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:16.035 10:45:32 -- common/autotest_common.sh@10 -- # set +x 00:15:16.035 [2024-07-10 10:45:32.500122] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:16.035 [2024-07-10 10:45:32.500202] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.035 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.035 [2024-07-10 10:45:32.559232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.035 [2024-07-10 10:45:32.642499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.035 [2024-07-10 10:45:32.642647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.035 [2024-07-10 10:45:32.642666] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.035 [2024-07-10 10:45:32.642680] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.035 [2024-07-10 10:45:32.642770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.035 [2024-07-10 10:45:32.642859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.035 [2024-07-10 10:45:32.642927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.035 [2024-07-10 10:45:32.642925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.969 10:45:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.969 10:45:33 -- common/autotest_common.sh@852 -- # return 0 00:15:16.970 10:45:33 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:17.902 10:45:34 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:18.159 10:45:34 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:18.159 10:45:34 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:18.159 10:45:34 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.159 10:45:34 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:18.159 10:45:34 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:18.417 Malloc1 00:15:18.417 10:45:35 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:18.674 10:45:35 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:18.932 10:45:35 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:19.189 10:45:35 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.189 10:45:35 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:19.189 10:45:35 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:19.447 Malloc2 00:15:19.447 10:45:36 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:19.704 10:45:36 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:19.963 10:45:36 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:20.221 10:45:36 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:20.221 10:45:36 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:20.221 10:45:36 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.221 10:45:36 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:20.221 10:45:36 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:20.221 10:45:36 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:20.221 [2024-07-10 10:45:36.830131] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:20.221 [2024-07-10 10:45:36.830173] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429699 ] 00:15:20.221 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.221 [2024-07-10 10:45:36.861557] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:20.221 [2024-07-10 10:45:36.870792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.221 [2024-07-10 10:45:36.870818] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbf7a437000 00:15:20.221 [2024-07-10 10:45:36.871788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.872781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.873799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.874794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.875798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.876802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.877807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.878811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.221 [2024-07-10 10:45:36.879815] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.221 [2024-07-10 10:45:36.879834] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbf791ed000 00:15:20.221 [2024-07-10 10:45:36.880951] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.221 [2024-07-10 10:45:36.896056] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:20.221 [2024-07-10 10:45:36.896089] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:20.221 [2024-07-10 10:45:36.900957] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:20.221 [2024-07-10 10:45:36.901005] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:20.221 [2024-07-10 10:45:36.901094] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:20.221 [2024-07-10 10:45:36.901124] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:20.221 [2024-07-10 10:45:36.901134] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:20.221 [2024-07-10 10:45:36.901949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:20.221 [2024-07-10 10:45:36.901969] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:20.221 [2024-07-10 10:45:36.901987] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:20.221 [2024-07-10 10:45:36.902955] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:20.221 [2024-07-10 10:45:36.902973] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:20.221 [2024-07-10 10:45:36.902986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.221 [2024-07-10 10:45:36.903961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:20.221 [2024-07-10 10:45:36.903978] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.221 [2024-07-10 10:45:36.904968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:20.221 [2024-07-10 10:45:36.904987] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:20.221 [2024-07-10 10:45:36.904996] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:20.221 [2024-07-10 10:45:36.905007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.221 [2024-07-10 10:45:36.905116] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:20.221 [2024-07-10 10:45:36.905123] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.221 [2024-07-10 10:45:36.905132] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:20.221 [2024-07-10 10:45:36.905976] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:20.221 [2024-07-10 10:45:36.906980] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:20.221 [2024-07-10 10:45:36.907988] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:20.221 [2024-07-10 10:45:36.910440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.221 [2024-07-10 10:45:36.911000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:20.221 [2024-07-10 10:45:36.911017] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.222 [2024-07-10 10:45:36.911026] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911049] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:20.222 [2024-07-10 10:45:36.911062] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911082] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.222 [2024-07-10 10:45:36.911091] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.222 [2024-07-10 10:45:36.911112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911177] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:20.222 [2024-07-10 10:45:36.911185] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:20.222 [2024-07-10 10:45:36.911192] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:20.222 [2024-07-10 10:45:36.911199] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:20.222 [2024-07-10 10:45:36.911207] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:20.222 [2024-07-10 10:45:36.911214] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:20.222 [2024-07-10 10:45:36.911221] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-07-10 10:45:36.911300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-07-10 10:45:36.911311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-07-10 10:45:36.911322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.222 [2024-07-10 10:45:36.911330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911344] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911378] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:20.222 [2024-07-10 10:45:36.911386] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911432] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911529] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911544] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911556] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:20.222 [2024-07-10 10:45:36.911564] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:20.222 [2024-07-10 10:45:36.911573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911610] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:20.222 [2024-07-10 10:45:36.911628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911642] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911654] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.222 [2024-07-10 10:45:36.911661] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.222 [2024-07-10 10:45:36.911671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911751] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.222 [2024-07-10 10:45:36.911758] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.222 [2024-07-10 10:45:36.911768] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911794] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911805] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911826] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911834] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911842] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:20.222 [2024-07-10 10:45:36.911852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:20.222 [2024-07-10 10:45:36.911861] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:20.222 [2024-07-10 10:45:36.911884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.911973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.911985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.912001] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:20.222 [2024-07-10 10:45:36.912010] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:20.222 [2024-07-10 10:45:36.912016] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:20.222 [2024-07-10 
10:45:36.912021] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:20.222 [2024-07-10 10:45:36.912030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:20.222 [2024-07-10 10:45:36.912041] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:20.222 [2024-07-10 10:45:36.912049] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:20.222 [2024-07-10 10:45:36.912057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.912067] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:20.222 [2024-07-10 10:45:36.912075] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.222 [2024-07-10 10:45:36.912083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.912095] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:20.222 [2024-07-10 10:45:36.912103] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:20.222 [2024-07-10 10:45:36.912111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:20.222 [2024-07-10 10:45:36.912122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.912142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.912156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:20.222 [2024-07-10 10:45:36.912170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:20.222 ===================================================== 00:15:20.222 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.222 ===================================================== 00:15:20.222 Controller Capabilities/Features 00:15:20.222 ================================ 00:15:20.222 Vendor ID: 4e58 00:15:20.222 Subsystem Vendor ID: 4e58 00:15:20.222 Serial Number: SPDK1 00:15:20.222 Model Number: SPDK bdev Controller 00:15:20.222 Firmware Version: 24.01.1 00:15:20.222 Recommended Arb Burst: 6 00:15:20.222 IEEE OUI Identifier: 8d 6b 50 00:15:20.222 Multi-path I/O 00:15:20.222 May have multiple subsystem ports: Yes 00:15:20.222 May have multiple controllers: Yes 00:15:20.222 Associated with SR-IOV VF: No 00:15:20.222 Max Data Transfer Size: 131072 00:15:20.222 Max Number of Namespaces: 32 00:15:20.222 Max Number of I/O Queues: 127 00:15:20.222 NVMe Specification Version (VS): 1.3 00:15:20.222 NVMe Specification Version (Identify): 1.3 00:15:20.222 Maximum Queue Entries: 256 00:15:20.222 Contiguous Queues Required: Yes 00:15:20.222 Arbitration Mechanisms Supported 00:15:20.222 
Weighted Round Robin: Not Supported 00:15:20.222 Vendor Specific: Not Supported 00:15:20.222 Reset Timeout: 15000 ms 00:15:20.222 Doorbell Stride: 4 bytes 00:15:20.222 NVM Subsystem Reset: Not Supported 00:15:20.222 Command Sets Supported 00:15:20.222 NVM Command Set: Supported 00:15:20.222 Boot Partition: Not Supported 00:15:20.222 Memory Page Size Minimum: 4096 bytes 00:15:20.222 Memory Page Size Maximum: 4096 bytes 00:15:20.222 Persistent Memory Region: Not Supported 00:15:20.222 Optional Asynchronous Events Supported 00:15:20.222 Namespace Attribute Notices: Supported 00:15:20.222 Firmware Activation Notices: Not Supported 00:15:20.222 ANA Change Notices: Not Supported 00:15:20.222 PLE Aggregate Log Change Notices: Not Supported 00:15:20.222 LBA Status Info Alert Notices: Not Supported 00:15:20.222 EGE Aggregate Log Change Notices: Not Supported 00:15:20.222 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.222 Zone Descriptor Change Notices: Not Supported 00:15:20.222 Discovery Log Change Notices: Not Supported 00:15:20.222 Controller Attributes 00:15:20.222 128-bit Host Identifier: Supported 00:15:20.222 Non-Operational Permissive Mode: Not Supported 00:15:20.222 NVM Sets: Not Supported 00:15:20.222 Read Recovery Levels: Not Supported 00:15:20.222 Endurance Groups: Not Supported 00:15:20.222 Predictable Latency Mode: Not Supported 00:15:20.222 Traffic Based Keep ALive: Not Supported 00:15:20.222 Namespace Granularity: Not Supported 00:15:20.222 SQ Associations: Not Supported 00:15:20.222 UUID List: Not Supported 00:15:20.222 Multi-Domain Subsystem: Not Supported 00:15:20.222 Fixed Capacity Management: Not Supported 00:15:20.222 Variable Capacity Management: Not Supported 00:15:20.222 Delete Endurance Group: Not Supported 00:15:20.222 Delete NVM Set: Not Supported 00:15:20.222 Extended LBA Formats Supported: Not Supported 00:15:20.222 Flexible Data Placement Supported: Not Supported 00:15:20.222 00:15:20.222 Controller Memory Buffer Support 00:15:20.222 ================================ 00:15:20.222 Supported: No 00:15:20.222 00:15:20.222 Persistent Memory Region Support 00:15:20.222 ================================ 00:15:20.222 Supported: No 00:15:20.222 00:15:20.222 Admin Command Set Attributes 00:15:20.222 ============================ 00:15:20.222 Security Send/Receive: Not Supported 00:15:20.222 Format NVM: Not Supported 00:15:20.222 Firmware Activate/Download: Not Supported 00:15:20.222 Namespace Management: Not Supported 00:15:20.222 Device Self-Test: Not Supported 00:15:20.222 Directives: Not Supported 00:15:20.222 NVMe-MI: Not Supported 00:15:20.222 Virtualization Management: Not Supported 00:15:20.222 Doorbell Buffer Config: Not Supported 00:15:20.222 Get LBA Status Capability: Not Supported 00:15:20.222 Command & Feature Lockdown Capability: Not Supported 00:15:20.222 Abort Command Limit: 4 00:15:20.222 Async Event Request Limit: 4 00:15:20.222 Number of Firmware Slots: N/A 00:15:20.222 Firmware Slot 1 Read-Only: N/A 00:15:20.222 Firmware Activation Without Reset: N/A 00:15:20.222 Multiple Update Detection Support: N/A 00:15:20.222 Firmware Update Granularity: No Information Provided 00:15:20.222 Per-Namespace SMART Log: No 00:15:20.222 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.222 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:20.222 Command Effects Log Page: Supported 00:15:20.223 Get Log Page Extended Data: Supported 00:15:20.223 Telemetry Log Pages: Not Supported 00:15:20.223 Persistent Event Log Pages: Not Supported 00:15:20.223 Supported 
Log Pages Log Page: May Support 00:15:20.223 Commands Supported & Effects Log Page: Not Supported 00:15:20.223 Feature Identifiers & Effects Log Page:May Support 00:15:20.223 NVMe-MI Commands & Effects Log Page: May Support 00:15:20.223 Data Area 4 for Telemetry Log: Not Supported 00:15:20.223 Error Log Page Entries Supported: 128 00:15:20.223 Keep Alive: Supported 00:15:20.223 Keep Alive Granularity: 10000 ms 00:15:20.223 00:15:20.223 NVM Command Set Attributes 00:15:20.223 ========================== 00:15:20.223 Submission Queue Entry Size 00:15:20.223 Max: 64 00:15:20.223 Min: 64 00:15:20.223 Completion Queue Entry Size 00:15:20.223 Max: 16 00:15:20.223 Min: 16 00:15:20.223 Number of Namespaces: 32 00:15:20.223 Compare Command: Supported 00:15:20.223 Write Uncorrectable Command: Not Supported 00:15:20.223 Dataset Management Command: Supported 00:15:20.223 Write Zeroes Command: Supported 00:15:20.223 Set Features Save Field: Not Supported 00:15:20.223 Reservations: Not Supported 00:15:20.223 Timestamp: Not Supported 00:15:20.223 Copy: Supported 00:15:20.223 Volatile Write Cache: Present 00:15:20.223 Atomic Write Unit (Normal): 1 00:15:20.223 Atomic Write Unit (PFail): 1 00:15:20.223 Atomic Compare & Write Unit: 1 00:15:20.223 Fused Compare & Write: Supported 00:15:20.223 Scatter-Gather List 00:15:20.223 SGL Command Set: Supported (Dword aligned) 00:15:20.223 SGL Keyed: Not Supported 00:15:20.223 SGL Bit Bucket Descriptor: Not Supported 00:15:20.223 SGL Metadata Pointer: Not Supported 00:15:20.223 Oversized SGL: Not Supported 00:15:20.223 SGL Metadata Address: Not Supported 00:15:20.223 SGL Offset: Not Supported 00:15:20.223 Transport SGL Data Block: Not Supported 00:15:20.223 Replay Protected Memory Block: Not Supported 00:15:20.223 00:15:20.223 Firmware Slot Information 00:15:20.223 ========================= 00:15:20.223 Active slot: 1 00:15:20.223 Slot 1 Firmware Revision: 24.01.1 00:15:20.223 00:15:20.223 00:15:20.223 Commands Supported and Effects 00:15:20.223 ============================== 00:15:20.223 Admin Commands 00:15:20.223 -------------- 00:15:20.223 Get Log Page (02h): Supported 00:15:20.223 Identify (06h): Supported 00:15:20.223 Abort (08h): Supported 00:15:20.223 Set Features (09h): Supported 00:15:20.223 Get Features (0Ah): Supported 00:15:20.223 Asynchronous Event Request (0Ch): Supported 00:15:20.223 Keep Alive (18h): Supported 00:15:20.223 I/O Commands 00:15:20.223 ------------ 00:15:20.223 Flush (00h): Supported LBA-Change 00:15:20.223 Write (01h): Supported LBA-Change 00:15:20.223 Read (02h): Supported 00:15:20.223 Compare (05h): Supported 00:15:20.223 Write Zeroes (08h): Supported LBA-Change 00:15:20.223 Dataset Management (09h): Supported LBA-Change 00:15:20.223 Copy (19h): Supported LBA-Change 00:15:20.223 Unknown (79h): Supported LBA-Change 00:15:20.223 Unknown (7Ah): Supported 00:15:20.223 00:15:20.223 Error Log 00:15:20.223 ========= 00:15:20.223 00:15:20.223 Arbitration 00:15:20.223 =========== 00:15:20.223 Arbitration Burst: 1 00:15:20.223 00:15:20.223 Power Management 00:15:20.223 ================ 00:15:20.223 Number of Power States: 1 00:15:20.223 Current Power State: Power State #0 00:15:20.223 Power State #0: 00:15:20.223 Max Power: 0.00 W 00:15:20.223 Non-Operational State: Operational 00:15:20.223 Entry Latency: Not Reported 00:15:20.223 Exit Latency: Not Reported 00:15:20.223 Relative Read Throughput: 0 00:15:20.223 Relative Read Latency: 0 00:15:20.223 Relative Write Throughput: 0 00:15:20.223 Relative Write Latency: 0 00:15:20.223 Idle Power: Not 
Reported 00:15:20.223 Active Power: Not Reported 00:15:20.223 Non-Operational Permissive Mode: Not Supported 00:15:20.223 00:15:20.223
[2024-07-10 10:45:36.912295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:20.223 [2024-07-10 10:45:36.912312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:20.223 [2024-07-10 10:45:36.912349] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:20.223 [2024-07-10 10:45:36.912365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.223 [2024-07-10 10:45:36.912375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.223 [2024-07-10 10:45:36.912385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.223 [2024-07-10 10:45:36.912394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.223 [2024-07-10 10:45:36.915437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:20.223 [2024-07-10 10:45:36.915459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:20.223 [2024-07-10 10:45:36.916074] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:20.223 [2024-07-10 10:45:36.916087] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:20.223 [2024-07-10 10:45:36.917047] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:20.223 [2024-07-10 10:45:36.917069] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:20.223 [2024-07-10 10:45:36.917121] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:20.223 [2024-07-10 10:45:36.919089] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.223
Health Information 00:15:20.223 ================== 00:15:20.223 Critical Warnings: 00:15:20.223 Available Spare Space: OK 00:15:20.223 Temperature: OK 00:15:20.223 Device Reliability: OK 00:15:20.223 Read Only: No 00:15:20.223 Volatile Memory Backup: OK 00:15:20.223 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:20.223 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:20.223 Available Spare: 0% 00:15:20.223 Available Spare Threshold: 0% 00:15:20.223 Life Percentage Used: 0% 00:15:20.223 Data Units Read: 0 00:15:20.223 Data Units Written: 0 00:15:20.223 Host Read Commands: 0 00:15:20.223 Host Write Commands: 0 00:15:20.223 Controller Busy Time: 0 minutes 00:15:20.223 Power Cycles: 0 00:15:20.223 Power On Hours: 0 hours 00:15:20.223 Unsafe Shutdowns: 0 00:15:20.223 Unrecoverable Media Errors: 0 00:15:20.223 Lifetime Error Log Entries: 0 00:15:20.223 Warning Temperature
Time: 0 minutes 00:15:20.223 Critical Temperature Time: 0 minutes 00:15:20.223 00:15:20.223 Number of Queues 00:15:20.223 ================ 00:15:20.223 Number of I/O Submission Queues: 127 00:15:20.223 Number of I/O Completion Queues: 127 00:15:20.223 00:15:20.223 Active Namespaces 00:15:20.223 ================= 00:15:20.223 Namespace ID:1 00:15:20.223 Error Recovery Timeout: Unlimited 00:15:20.223 Command Set Identifier: NVM (00h) 00:15:20.223 Deallocate: Supported 00:15:20.223 Deallocated/Unwritten Error: Not Supported 00:15:20.223 Deallocated Read Value: Unknown 00:15:20.223 Deallocate in Write Zeroes: Not Supported 00:15:20.223 Deallocated Guard Field: 0xFFFF 00:15:20.223 Flush: Supported 00:15:20.223 Reservation: Supported 00:15:20.223 Namespace Sharing Capabilities: Multiple Controllers 00:15:20.223 Size (in LBAs): 131072 (0GiB) 00:15:20.223 Capacity (in LBAs): 131072 (0GiB) 00:15:20.223 Utilization (in LBAs): 131072 (0GiB) 00:15:20.223 NGUID: 490954030A6346D79E9E531A34472611 00:15:20.223 UUID: 49095403-0a63-46d7-9e9e-531a34472611 00:15:20.223 Thin Provisioning: Not Supported 00:15:20.223 Per-NS Atomic Units: Yes 00:15:20.223 Atomic Boundary Size (Normal): 0 00:15:20.223 Atomic Boundary Size (PFail): 0 00:15:20.223 Atomic Boundary Offset: 0 00:15:20.223 Maximum Single Source Range Length: 65535 00:15:20.223 Maximum Copy Length: 65535 00:15:20.223 Maximum Source Range Count: 1 00:15:20.223 NGUID/EUI64 Never Reused: No 00:15:20.223 Namespace Write Protected: No 00:15:20.223 Number of LBA Formats: 1 00:15:20.223 Current LBA Format: LBA Format #00 00:15:20.223 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:20.223 00:15:20.223 10:45:36 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:20.223 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.484 Initializing NVMe Controllers 00:15:25.484 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:25.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:25.484 Initialization complete. Launching workers. 00:15:25.484 ======================================================== 00:15:25.484 Latency(us) 00:15:25.484 Device Information : IOPS MiB/s Average min max 00:15:25.484 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36895.92 144.12 3468.51 1159.04 7632.88 00:15:25.484 ======================================================== 00:15:25.484 Total : 36895.92 144.12 3468.51 1159.04 7632.88 00:15:25.484 00:15:25.485 10:45:42 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:25.485 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.762 Initializing NVMe Controllers 00:15:30.762 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:30.762 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:30.762 Initialization complete. Launching workers. 
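As a quick, illustrative cross-check of the spdk_nvme_perf summary above (a sketch, not part of the test output): the MiB/s column follows directly from the IOPS column and the 4096-byte I/O size given by -o 4096. A minimal Python snippet, with the read-run values copied from the table above:

    # Values copied from the 4 KiB read run above; the script itself is illustrative only.
    io_size_bytes = 4096            # from the -o 4096 option in the spdk_nvme_perf command
    iops = 36895.92                 # IOPS column reported for the read run
    reported_mib_s = 144.12         # MiB/s column reported for the read run
    computed_mib_s = iops * io_size_bytes / (1024 * 1024)
    print(f"computed {computed_mib_s:.2f} MiB/s vs reported {reported_mib_s:.2f} MiB/s")
    # computed 144.12 MiB/s vs reported 144.12 MiB/s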
00:15:30.762 ======================================================== 00:15:30.762 Latency(us) 00:15:30.762 Device Information : IOPS MiB/s Average min max 00:15:30.762 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.19 62.70 7984.44 6967.43 11992.08 00:15:30.762 ======================================================== 00:15:30.762 Total : 16051.19 62.70 7984.44 6967.43 11992.08 00:15:30.762 00:15:30.762 10:45:47 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:30.762 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.025 Initializing NVMe Controllers 00:15:36.025 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:36.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:36.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:36.025 Initialization complete. Launching workers. 00:15:36.025 Starting thread on core 2 00:15:36.025 Starting thread on core 3 00:15:36.025 Starting thread on core 1 00:15:36.025 10:45:52 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:36.025 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.207 Initializing NVMe Controllers 00:15:40.207 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.207 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.207 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:40.207 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:40.207 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:40.207 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:40.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:40.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:40.207 Initialization complete. Launching workers. 
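The average latencies in these perf summaries are also roughly consistent with Little's law for a fixed queue depth (average latency is approximately queue depth divided by IOPS, with -q 128 in both perf invocations above). A small illustrative Python check using values copied from the read and write tables; it is a rough consistency check only, not output of the test:

    # Rough Little's-law sanity check; values copied from the two perf runs above.
    queue_depth = 128               # from the -q 128 option
    runs = {
        "read":  {"iops": 36895.92, "reported_avg_us": 3468.51},
        "write": {"iops": 16051.19, "reported_avg_us": 7984.44},
    }
    for name, r in runs.items():
        estimated_us = queue_depth / r["iops"] * 1e6
        print(f"{name}: estimated {estimated_us:.0f} us vs reported {r['reported_avg_us']:.2f} us")
    # read: estimated 3469 us vs reported 3468.51 us
    # write: estimated 7974 us vs reported 7984.44 us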
00:15:40.207 Starting thread on core 1 with urgent priority queue 00:15:40.207 Starting thread on core 2 with urgent priority queue 00:15:40.207 Starting thread on core 3 with urgent priority queue 00:15:40.207 Starting thread on core 0 with urgent priority queue 00:15:40.207 SPDK bdev Controller (SPDK1 ) core 0: 5381.67 IO/s 18.58 secs/100000 ios 00:15:40.207 SPDK bdev Controller (SPDK1 ) core 1: 5349.00 IO/s 18.70 secs/100000 ios 00:15:40.207 SPDK bdev Controller (SPDK1 ) core 2: 5243.33 IO/s 19.07 secs/100000 ios 00:15:40.207 SPDK bdev Controller (SPDK1 ) core 3: 5466.33 IO/s 18.29 secs/100000 ios 00:15:40.207 ======================================================== 00:15:40.207 00:15:40.207 10:45:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.207 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.207 Initializing NVMe Controllers 00:15:40.207 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.207 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.207 Namespace ID: 1 size: 0GB 00:15:40.207 Initialization complete. 00:15:40.207 INFO: using host memory buffer for IO 00:15:40.207 Hello world! 00:15:40.207 10:45:56 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.207 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.140 Initializing NVMe Controllers 00:15:41.140 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.140 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.140 Initialization complete. Launching workers. 
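In the arbitration summary above (the per-core table printed before the hello_world run), the "secs/100000 ios" column is simply 100000 divided by the per-core IO/s figure. An illustrative Python check with the four per-core values copied from that table:

    # Values copied from the arbitration run above; illustrative only.
    per_core_io_per_sec = {0: 5381.67, 1: 5349.00, 2: 5243.33, 3: 5466.33}
    for core, io_per_sec in per_core_io_per_sec.items():
        print(f"core {core}: {100000 / io_per_sec:.2f} secs/100000 ios")
    # core 0: 18.58, core 1: 18.70, core 2: 19.07, core 3: 18.29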
00:15:41.140 submit (in ns) avg, min, max = 6783.0, 3433.3, 5015341.1 00:15:41.140 complete (in ns) avg, min, max = 25982.8, 2032.2, 4999373.3 00:15:41.140 00:15:41.140 Submit histogram 00:15:41.140 ================ 00:15:41.140 Range in us Cumulative Count 00:15:41.140 3.413 - 3.437: 0.0142% ( 2) 00:15:41.140 3.437 - 3.461: 0.2347% ( 31) 00:15:41.140 3.461 - 3.484: 1.0456% ( 114) 00:15:41.140 3.484 - 3.508: 3.1154% ( 291) 00:15:41.140 3.508 - 3.532: 8.1585% ( 709) 00:15:41.140 3.532 - 3.556: 15.5488% ( 1039) 00:15:41.140 3.556 - 3.579: 25.4712% ( 1395) 00:15:41.140 3.579 - 3.603: 34.9314% ( 1330) 00:15:41.140 3.603 - 3.627: 43.5308% ( 1209) 00:15:41.140 3.627 - 3.650: 51.4830% ( 1118) 00:15:41.140 3.650 - 3.674: 57.2231% ( 807) 00:15:41.140 3.674 - 3.698: 62.5080% ( 743) 00:15:41.140 3.698 - 3.721: 66.7828% ( 601) 00:15:41.140 3.721 - 3.745: 69.8414% ( 430) 00:15:41.140 3.745 - 3.769: 72.8430% ( 422) 00:15:41.140 3.769 - 3.793: 76.0083% ( 445) 00:15:41.140 3.793 - 3.816: 79.5718% ( 501) 00:15:41.140 3.816 - 3.840: 82.8793% ( 465) 00:15:41.140 3.840 - 3.864: 85.6391% ( 388) 00:15:41.140 3.864 - 3.887: 87.8797% ( 315) 00:15:41.140 3.887 - 3.911: 89.6792% ( 253) 00:15:41.140 3.911 - 3.935: 91.2654% ( 223) 00:15:41.140 3.935 - 3.959: 92.4319% ( 164) 00:15:41.140 3.959 - 3.982: 93.1859% ( 106) 00:15:41.140 3.982 - 4.006: 94.0465% ( 121) 00:15:41.140 4.006 - 4.030: 94.6725% ( 88) 00:15:41.140 4.030 - 4.053: 95.1846% ( 72) 00:15:41.140 4.053 - 4.077: 95.6611% ( 67) 00:15:41.140 4.077 - 4.101: 96.0310% ( 52) 00:15:41.140 4.101 - 4.124: 96.3084% ( 39) 00:15:41.140 4.124 - 4.148: 96.4791% ( 24) 00:15:41.140 4.148 - 4.172: 96.6214% ( 20) 00:15:41.140 4.172 - 4.196: 96.6854% ( 9) 00:15:41.140 4.196 - 4.219: 96.7779% ( 13) 00:15:41.140 4.219 - 4.243: 96.8988% ( 17) 00:15:41.140 4.243 - 4.267: 96.9699% ( 10) 00:15:41.140 4.267 - 4.290: 97.1051% ( 19) 00:15:41.140 4.290 - 4.314: 97.1620% ( 8) 00:15:41.140 4.314 - 4.338: 97.2260% ( 9) 00:15:41.140 4.338 - 4.361: 97.2473% ( 3) 00:15:41.140 4.361 - 4.385: 97.2544% ( 1) 00:15:41.140 4.385 - 4.409: 97.2687% ( 2) 00:15:41.140 4.409 - 4.433: 97.2758% ( 1) 00:15:41.140 4.433 - 4.456: 97.2971% ( 3) 00:15:41.140 4.456 - 4.480: 97.3042% ( 1) 00:15:41.140 4.480 - 4.504: 97.3113% ( 1) 00:15:41.140 4.504 - 4.527: 97.3256% ( 2) 00:15:41.140 4.551 - 4.575: 97.3469% ( 3) 00:15:41.140 4.575 - 4.599: 97.3540% ( 1) 00:15:41.140 4.599 - 4.622: 97.3682% ( 2) 00:15:41.140 4.622 - 4.646: 97.3825% ( 2) 00:15:41.140 4.646 - 4.670: 97.4251% ( 6) 00:15:41.140 4.670 - 4.693: 97.4394% ( 2) 00:15:41.140 4.693 - 4.717: 97.5105% ( 10) 00:15:41.140 4.717 - 4.741: 97.5532% ( 6) 00:15:41.140 4.741 - 4.764: 97.6385% ( 12) 00:15:41.140 4.764 - 4.788: 97.6883% ( 7) 00:15:41.140 4.788 - 4.812: 97.7168% ( 4) 00:15:41.140 4.812 - 4.836: 97.7666% ( 7) 00:15:41.140 4.836 - 4.859: 97.8163% ( 7) 00:15:41.140 4.859 - 4.883: 97.8448% ( 4) 00:15:41.140 4.883 - 4.907: 97.8661% ( 3) 00:15:41.140 4.907 - 4.930: 97.9230% ( 8) 00:15:41.140 4.954 - 4.978: 97.9444% ( 3) 00:15:41.140 4.978 - 5.001: 97.9657% ( 3) 00:15:41.140 5.001 - 5.025: 98.0368% ( 10) 00:15:41.140 5.025 - 5.049: 98.0440% ( 1) 00:15:41.140 5.049 - 5.073: 98.0866% ( 6) 00:15:41.140 5.073 - 5.096: 98.1151% ( 4) 00:15:41.140 5.096 - 5.120: 98.1293% ( 2) 00:15:41.140 5.120 - 5.144: 98.1364% ( 1) 00:15:41.140 5.167 - 5.191: 98.1720% ( 5) 00:15:41.140 5.191 - 5.215: 98.1791% ( 1) 00:15:41.140 5.215 - 5.239: 98.2076% ( 4) 00:15:41.140 5.239 - 5.262: 98.2218% ( 2) 00:15:41.140 5.262 - 5.286: 98.2289% ( 1) 00:15:41.140 5.310 - 5.333: 98.2360% ( 1) 
00:15:41.140 5.381 - 5.404: 98.2431% ( 1) 00:15:41.140 5.404 - 5.428: 98.2645% ( 3) 00:15:41.140 5.476 - 5.499: 98.2716% ( 1) 00:15:41.140 5.499 - 5.523: 98.2858% ( 2) 00:15:41.140 5.547 - 5.570: 98.2929% ( 1) 00:15:41.140 5.641 - 5.665: 98.3000% ( 1) 00:15:41.140 5.689 - 5.713: 98.3071% ( 1) 00:15:41.140 5.713 - 5.736: 98.3142% ( 1) 00:15:41.140 5.736 - 5.760: 98.3214% ( 1) 00:15:41.140 5.784 - 5.807: 98.3427% ( 3) 00:15:41.140 5.807 - 5.831: 98.3498% ( 1) 00:15:41.140 5.879 - 5.902: 98.3712% ( 3) 00:15:41.140 5.950 - 5.973: 98.3854% ( 2) 00:15:41.140 5.973 - 5.997: 98.3925% ( 1) 00:15:41.140 6.044 - 6.068: 98.4067% ( 2) 00:15:41.140 6.116 - 6.163: 98.4138% ( 1) 00:15:41.140 6.210 - 6.258: 98.4209% ( 1) 00:15:41.140 6.305 - 6.353: 98.4281% ( 1) 00:15:41.140 6.495 - 6.542: 98.4352% ( 1) 00:15:41.140 6.542 - 6.590: 98.4423% ( 1) 00:15:41.140 6.637 - 6.684: 98.4494% ( 1) 00:15:41.140 6.732 - 6.779: 98.4636% ( 2) 00:15:41.140 6.921 - 6.969: 98.4778% ( 2) 00:15:41.140 6.969 - 7.016: 98.4921% ( 2) 00:15:41.140 7.253 - 7.301: 98.4992% ( 1) 00:15:41.141 7.301 - 7.348: 98.5134% ( 2) 00:15:41.141 7.348 - 7.396: 98.5205% ( 1) 00:15:41.141 7.396 - 7.443: 98.5347% ( 2) 00:15:41.141 7.443 - 7.490: 98.5419% ( 1) 00:15:41.141 7.633 - 7.680: 98.5490% ( 1) 00:15:41.141 7.680 - 7.727: 98.5632% ( 2) 00:15:41.141 7.775 - 7.822: 98.5774% ( 2) 00:15:41.141 7.822 - 7.870: 98.5916% ( 2) 00:15:41.141 7.870 - 7.917: 98.5988% ( 1) 00:15:41.141 7.917 - 7.964: 98.6059% ( 1) 00:15:41.141 7.964 - 8.012: 98.6272% ( 3) 00:15:41.141 8.012 - 8.059: 98.6343% ( 1) 00:15:41.141 8.059 - 8.107: 98.6486% ( 2) 00:15:41.141 8.154 - 8.201: 98.6557% ( 1) 00:15:41.141 8.201 - 8.249: 98.6628% ( 1) 00:15:41.141 8.344 - 8.391: 98.6699% ( 1) 00:15:41.141 8.676 - 8.723: 98.6841% ( 2) 00:15:41.141 8.818 - 8.865: 98.6912% ( 1) 00:15:41.141 8.865 - 8.913: 98.6983% ( 1) 00:15:41.141 9.007 - 9.055: 98.7126% ( 2) 00:15:41.141 9.055 - 9.102: 98.7197% ( 1) 00:15:41.141 9.102 - 9.150: 98.7268% ( 1) 00:15:41.141 9.150 - 9.197: 98.7339% ( 1) 00:15:41.141 9.197 - 9.244: 98.7410% ( 1) 00:15:41.141 9.244 - 9.292: 98.7481% ( 1) 00:15:41.141 9.719 - 9.766: 98.7552% ( 1) 00:15:41.141 9.766 - 9.813: 98.7695% ( 2) 00:15:41.141 9.956 - 10.003: 98.7837% ( 2) 00:15:41.141 10.003 - 10.050: 98.7908% ( 1) 00:15:41.141 10.193 - 10.240: 98.7979% ( 1) 00:15:41.141 10.287 - 10.335: 98.8050% ( 1) 00:15:41.141 10.430 - 10.477: 98.8121% ( 1) 00:15:41.141 10.477 - 10.524: 98.8193% ( 1) 00:15:41.141 10.524 - 10.572: 98.8335% ( 2) 00:15:41.141 10.667 - 10.714: 98.8406% ( 1) 00:15:41.141 11.046 - 11.093: 98.8477% ( 1) 00:15:41.141 11.188 - 11.236: 98.8619% ( 2) 00:15:41.141 11.283 - 11.330: 98.8691% ( 1) 00:15:41.141 11.378 - 11.425: 98.8762% ( 1) 00:15:41.141 11.473 - 11.520: 98.8904% ( 2) 00:15:41.141 11.567 - 11.615: 98.9117% ( 3) 00:15:41.141 11.662 - 11.710: 98.9188% ( 1) 00:15:41.141 11.804 - 11.852: 98.9260% ( 1) 00:15:41.141 11.994 - 12.041: 98.9331% ( 1) 00:15:41.141 12.041 - 12.089: 98.9402% ( 1) 00:15:41.141 12.231 - 12.326: 98.9544% ( 2) 00:15:41.141 12.326 - 12.421: 98.9615% ( 1) 00:15:41.141 12.421 - 12.516: 98.9686% ( 1) 00:15:41.141 12.610 - 12.705: 98.9757% ( 1) 00:15:41.141 12.705 - 12.800: 98.9829% ( 1) 00:15:41.141 12.800 - 12.895: 98.9900% ( 1) 00:15:41.141 12.895 - 12.990: 98.9971% ( 1) 00:15:41.141 12.990 - 13.084: 99.0113% ( 2) 00:15:41.141 13.084 - 13.179: 99.0184% ( 1) 00:15:41.141 13.274 - 13.369: 99.0255% ( 1) 00:15:41.141 13.559 - 13.653: 99.0398% ( 2) 00:15:41.141 13.653 - 13.748: 99.0469% ( 1) 00:15:41.141 13.843 - 13.938: 99.0611% ( 2) 
00:15:41.141 13.938 - 14.033: 99.0682% ( 1) 00:15:41.141 14.033 - 14.127: 99.0753% ( 1) 00:15:41.141 14.222 - 14.317: 99.0824% ( 1) 00:15:41.141 14.696 - 14.791: 99.0967% ( 2) 00:15:41.141 15.644 - 15.739: 99.1038% ( 1) 00:15:41.141 17.067 - 17.161: 99.1180% ( 2) 00:15:41.141 17.161 - 17.256: 99.1251% ( 1) 00:15:41.141 17.256 - 17.351: 99.1322% ( 1) 00:15:41.141 17.351 - 17.446: 99.1536% ( 3) 00:15:41.141 17.446 - 17.541: 99.1678% ( 2) 00:15:41.141 17.541 - 17.636: 99.2176% ( 7) 00:15:41.141 17.636 - 17.730: 99.2247% ( 1) 00:15:41.141 17.730 - 17.825: 99.2603% ( 5) 00:15:41.141 17.825 - 17.920: 99.3101% ( 7) 00:15:41.141 17.920 - 18.015: 99.3741% ( 9) 00:15:41.141 18.015 - 18.110: 99.4025% ( 4) 00:15:41.141 18.110 - 18.204: 99.4452% ( 6) 00:15:41.141 18.204 - 18.299: 99.5092% ( 9) 00:15:41.141 18.299 - 18.394: 99.5875% ( 11) 00:15:41.141 18.394 - 18.489: 99.6444% ( 8) 00:15:41.141 18.489 - 18.584: 99.6728% ( 4) 00:15:41.141 18.584 - 18.679: 99.7013% ( 4) 00:15:41.141 18.679 - 18.773: 99.7084% ( 1) 00:15:41.141 18.773 - 18.868: 99.7368% ( 4) 00:15:41.141 18.868 - 18.963: 99.7582% ( 3) 00:15:41.141 19.153 - 19.247: 99.7653% ( 1) 00:15:41.141 19.247 - 19.342: 99.7724% ( 1) 00:15:41.141 19.342 - 19.437: 99.8008% ( 4) 00:15:41.141 19.437 - 19.532: 99.8222% ( 3) 00:15:41.141 19.532 - 19.627: 99.8364% ( 2) 00:15:41.141 19.816 - 19.911: 99.8435% ( 1) 00:15:41.141 19.911 - 20.006: 99.8506% ( 1) 00:15:41.141 20.006 - 20.101: 99.8577% ( 1) 00:15:41.141 20.101 - 20.196: 99.8649% ( 1) 00:15:41.141 20.290 - 20.385: 99.8720% ( 1) 00:15:41.141 20.480 - 20.575: 99.8791% ( 1) 00:15:41.141 22.187 - 22.281: 99.8862% ( 1) 00:15:41.141 22.661 - 22.756: 99.8933% ( 1) 00:15:41.141 23.893 - 23.988: 99.9004% ( 1) 00:15:41.141 27.117 - 27.307: 99.9146% ( 2) 00:15:41.141 27.307 - 27.496: 99.9218% ( 1) 00:15:41.141 33.564 - 33.754: 99.9289% ( 1) 00:15:41.141 3980.705 - 4004.978: 99.9858% ( 8) 00:15:41.141 4004.978 - 4029.250: 99.9929% ( 1) 00:15:41.141 5000.154 - 5024.427: 100.0000% ( 1) 00:15:41.141 00:15:41.141 Complete histogram 00:15:41.141 ================== 00:15:41.141 Range in us Cumulative Count 00:15:41.141 2.027 - 2.039: 0.6117% ( 86) 00:15:41.141 2.039 - 2.050: 17.5902% ( 2387) 00:15:41.141 2.050 - 2.062: 25.4286% ( 1102) 00:15:41.141 2.062 - 2.074: 33.2741% ( 1103) 00:15:41.141 2.074 - 2.086: 57.2587% ( 3372) 00:15:41.141 2.086 - 2.098: 62.1097% ( 682) 00:15:41.141 2.098 - 2.110: 64.6774% ( 361) 00:15:41.141 2.110 - 2.121: 71.3422% ( 937) 00:15:41.141 2.121 - 2.133: 72.5941% ( 176) 00:15:41.141 2.133 - 2.145: 78.7325% ( 863) 00:15:41.141 2.145 - 2.157: 87.3248% ( 1208) 00:15:41.141 2.157 - 2.169: 88.8186% ( 210) 00:15:41.141 2.169 - 2.181: 90.8030% ( 279) 00:15:41.141 2.181 - 2.193: 92.2114% ( 198) 00:15:41.141 2.193 - 2.204: 92.7520% ( 76) 00:15:41.141 2.204 - 2.216: 94.3026% ( 218) 00:15:41.141 2.216 - 2.228: 95.2415% ( 132) 00:15:41.141 2.228 - 2.240: 95.4691% ( 32) 00:15:41.141 2.240 - 2.252: 95.6469% ( 25) 00:15:41.141 2.252 - 2.264: 95.7252% ( 11) 00:15:41.141 2.264 - 2.276: 95.7678% ( 6) 00:15:41.141 2.276 - 2.287: 95.9599% ( 27) 00:15:41.141 2.287 - 2.299: 96.0737% ( 16) 00:15:41.141 2.299 - 2.311: 96.1804% ( 15) 00:15:41.141 2.311 - 2.323: 96.3867% ( 29) 00:15:41.141 2.323 - 2.335: 96.6356% ( 35) 00:15:41.141 2.335 - 2.347: 96.8703% ( 33) 00:15:41.142 2.347 - 2.359: 97.1620% ( 41) 00:15:41.142 2.359 - 2.370: 97.4322% ( 38) 00:15:41.142 2.370 - 2.382: 97.6030% ( 24) 00:15:41.142 2.382 - 2.394: 97.7950% ( 27) 00:15:41.142 2.394 - 2.406: 97.9159% ( 17) 00:15:41.142 2.406 - 2.418: 97.9942% ( 11) 
00:15:41.142 2.418 - 2.430: 98.0368% ( 6) 00:15:41.142 2.430 - 2.441: 98.0582% ( 3) 00:15:41.142 2.441 - 2.453: 98.0937% ( 5) 00:15:41.142 2.453 - 2.465: 98.1364% ( 6) 00:15:41.142 2.465 - 2.477: 98.1507% ( 2) 00:15:41.142 2.489 - 2.501: 98.1649% ( 2) 00:15:41.142 2.501 - 2.513: 98.1720% ( 1) 00:15:41.142 2.536 - 2.548: 98.1791% ( 1) 00:15:41.142 2.548 - 2.560: 98.1862% ( 1) 00:15:41.142 2.584 - 2.596: 98.1933% ( 1) 00:15:41.142 2.596 - 2.607: 98.2076% ( 2) 00:15:41.142 2.607 - 2.619: 98.2218% ( 2) 00:15:41.142 2.631 - 2.643: 98.2289% ( 1) 00:15:41.142 2.655 - 2.667: 98.2360% ( 1) 00:15:41.142 2.714 - 2.726: 98.2431% ( 1) 00:15:41.142 2.761 - 2.773: 98.2502% ( 1) 00:15:41.142 2.785 - 2.797: 98.2573% ( 1) 00:15:41.142 2.856 - 2.868: 98.2716% ( 2) 00:15:41.142 2.868 - 2.880: 98.2787% ( 1) 00:15:41.142 2.927 - 2.939: 98.2929% ( 2) 00:15:41.142 2.939 - 2.951: 98.3000% ( 1) 00:15:41.142 2.999 - 3.010: 98.3071% ( 1) 00:15:41.142 3.034 - 3.058: 98.3427% ( 5) 00:15:41.142 3.058 - 3.081: 98.3569% ( 2) 00:15:41.142 3.081 - 3.105: 98.3712% ( 2) 00:15:41.142 3.105 - 3.129: 98.3783% ( 1) 00:15:41.142 3.153 - 3.176: 98.3854% ( 1) 00:15:41.142 3.176 - 3.200: 98.3925% ( 1) 00:15:41.142 3.200 - 3.224: 98.4138% ( 3) 00:15:41.142 3.224 - 3.247: 98.4423% ( 4) 00:15:41.142 3.271 - 3.295: 98.4494% ( 1) 00:15:41.142 3.295 - 3.319: 98.4707% ( 3) 00:15:41.142 3.319 - 3.342: 98.4921% ( 3) 00:15:41.142 3.342 - 3.366: 98.5063% ( 2) 00:15:41.142 3.366 - 3.390: 98.5276% ( 3) 00:15:41.142 3.390 - 3.413: 98.5419% ( 2) 00:15:41.142 3.413 - 3.437: 98.5490% ( 1) 00:15:41.142 3.437 - 3.461: 98.5561% ( 1) 00:15:41.142 3.461 - 3.484: 98.5774% ( 3) 00:15:41.142 3.484 - 3.508: 98.5845% ( 1) 00:15:41.142 3.508 - 3.532: 98.5916% ( 1) 00:15:41.142 3.532 - 3.556: 98.5988% ( 1) 00:15:41.142 3.556 - 3.579: 98.6059% ( 1) 00:15:41.142 3.603 - 3.627: 98.6130% ( 1) 00:15:41.142 3.650 - 3.674: 98.6201% ( 1) 00:15:41.142 3.674 - 3.698: 98.6414% ( 3) 00:15:41.142 3.721 - 3.745: 98.6486% ( 1) 00:15:41.142 3.745 - 3.769: 98.6557% ( 1) 00:15:41.142 3.769 - 3.793: 98.6628% ( 1) 00:15:41.142 4.101 - 4.124: 98.6699% ( 1) 00:15:41.142 4.290 - 4.314: 98.6770% ( 1) 00:15:41.142 4.385 - 4.409: 98.6841% ( 1) 00:15:41.142 4.812 - 4.836: 98.6912% ( 1) 00:15:41.142 4.930 - 4.954: 98.6983% ( 1) 00:15:41.142 5.073 - 5.096: 98.7055% ( 1) 00:15:41.142 5.167 - 5.191: 98.7126% ( 1) 00:15:41.142 5.357 - 5.381: 98.7197% ( 1) 00:15:41.142 5.476 - 5.499: 98.7339% ( 2) 00:15:41.142 5.499 - 5.523: 98.7410% ( 1) 00:15:41.142 5.523 - 5.547: 98.7481% ( 1) 00:15:41.142 5.641 - 5.665: 98.7552% ( 1) 00:15:41.142 5.713 - 5.736: 98.7695% ( 2) 00:15:41.142 5.879 - 5.902: 98.7766% ( 1) 00:15:41.142 5.902 - 5.926: 98.7837% ( 1) 00:15:41.142 5.973 - 5.997: 98.7908% ( 1) 00:15:41.142 5.997 - 6.021: 98.7979% ( 1) 00:15:41.142 6.258 - 6.305: 98.8050% ( 1) 00:15:41.142 6.400 - 6.447: 98.8121% ( 1) 00:15:41.142 6.684 - 6.732: 98.8193% ( 1) 00:15:41.142 6.779 - 6.827: 98.8335% ( 2) 00:15:41.142 8.486 - 8.533: 98.8406% ( 1) 00:15:41.142 12.089 - 12.136: 98.8477% ( 1) 00:15:41.142 15.550 - 15.644: 98.8548% ( 1) 00:15:41.142 15.644 - 15.739: 98.8904% ( 5) 00:15:41.142 15.739 - 15.834: 98.9402% ( 7) 00:15:41.142 15.929 - 16.024: 98.9615% ( 3) 00:15:41.142 16.024 - 16.119: 98.9900% ( 4) 00:15:41.142 16.119 - 16.213: 99.0398% ( 7) 00:15:41.142 16.213 - 16.308: 99.0967% ( 8) 00:15:41.142 16.308 - 16.403: 99.1180% ( 3) 00:15:41.142 16.403 - 16.498: 99.1962% ( 11) 00:15:41.142 16.498 - 16.593: 99.2176% ( 3) 00:15:41.142 16.593 - 16.687: 99.2603% ( 6) 00:15:41.142 16.687 - 16.782: 99.3029% ( 6) 
00:15:41.142 16.782 - 16.877: 99.3172% ( 2) 00:15:41.142 16.877 - 16.972: 99.3385% ( 3) 00:15:41.142 17.161 - 17.256: 99.3527% ( 2) 00:15:41.142 17.256 - 17.351: 99.3598% ( 1) 00:15:41.142 17.351 - 17.446: 99.3670% ( 1) 00:15:41.142 17.446 - 17.541: 99.3812% ( 2) 00:15:41.142 17.541 - 17.636: 99.3954% ( 2) 00:15:41.142 17.920 - 18.015: 99.4025% ( 1) 00:15:41.142 2997.665 - 3009.801: 99.4096% ( 1) 00:15:41.142 3021.938 - 3034.074: 99.4167% ( 1) 00:15:41.142 3034.074 - 3046.210: 99.4239% ( 1) 00:15:41.142 3058.347 - 3070.483: 99.4310% ( 1) 00:15:41.142 3470.981 - 3495.253: 99.4381% ( 1) 00:15:41.142 3980.705 - 4004.978: 99.9289% ( 69) 00:15:41.142 4004.978 - 4029.250: 99.9787% ( 7) 00:15:41.142 4975.881 - 5000.154: 100.0000% ( 3) 00:15:41.142 00:15:41.142 10:45:57 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:41.142 10:45:57 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:41.142 10:45:57 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:41.142 10:45:57 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:41.142 10:45:57 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:41.400 [2024-07-10 10:45:58.024462] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:41.400 [ 00:15:41.400 { 00:15:41.400 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:41.400 "subtype": "Discovery", 00:15:41.400 "listen_addresses": [], 00:15:41.400 "allow_any_host": true, 00:15:41.400 "hosts": [] 00:15:41.400 }, 00:15:41.400 { 00:15:41.400 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:41.400 "subtype": "NVMe", 00:15:41.400 "listen_addresses": [ 00:15:41.400 { 00:15:41.400 "transport": "VFIOUSER", 00:15:41.400 "trtype": "VFIOUSER", 00:15:41.400 "adrfam": "IPv4", 00:15:41.400 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:41.400 "trsvcid": "0" 00:15:41.400 } 00:15:41.400 ], 00:15:41.400 "allow_any_host": true, 00:15:41.400 "hosts": [], 00:15:41.400 "serial_number": "SPDK1", 00:15:41.400 "model_number": "SPDK bdev Controller", 00:15:41.400 "max_namespaces": 32, 00:15:41.400 "min_cntlid": 1, 00:15:41.400 "max_cntlid": 65519, 00:15:41.400 "namespaces": [ 00:15:41.400 { 00:15:41.400 "nsid": 1, 00:15:41.400 "bdev_name": "Malloc1", 00:15:41.400 "name": "Malloc1", 00:15:41.400 "nguid": "490954030A6346D79E9E531A34472611", 00:15:41.400 "uuid": "49095403-0a63-46d7-9e9e-531a34472611" 00:15:41.400 } 00:15:41.400 ] 00:15:41.400 }, 00:15:41.400 { 00:15:41.400 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:41.400 "subtype": "NVMe", 00:15:41.400 "listen_addresses": [ 00:15:41.400 { 00:15:41.400 "transport": "VFIOUSER", 00:15:41.400 "trtype": "VFIOUSER", 00:15:41.400 "adrfam": "IPv4", 00:15:41.400 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:41.400 "trsvcid": "0" 00:15:41.400 } 00:15:41.400 ], 00:15:41.400 "allow_any_host": true, 00:15:41.400 "hosts": [], 00:15:41.400 "serial_number": "SPDK2", 00:15:41.400 "model_number": "SPDK bdev Controller", 00:15:41.400 "max_namespaces": 32, 00:15:41.400 "min_cntlid": 1, 00:15:41.400 "max_cntlid": 65519, 00:15:41.400 "namespaces": [ 00:15:41.400 { 00:15:41.400 "nsid": 1, 00:15:41.400 "bdev_name": "Malloc2", 00:15:41.400 "name": "Malloc2", 00:15:41.400 "nguid": "89DA9F3C2AFD4918B0D6137F4643A58C", 00:15:41.400 
"uuid": "89da9f3c-2afd-4918-b0d6-137f4643a58c" 00:15:41.400 } 00:15:41.400 ] 00:15:41.400 } 00:15:41.400 ] 00:15:41.400 10:45:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:41.400 10:45:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3432289 00:15:41.400 10:45:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:41.400 10:45:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:41.400 10:45:58 -- common/autotest_common.sh@1244 -- # local i=0 00:15:41.400 10:45:58 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:41.400 10:45:58 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:41.400 10:45:58 -- common/autotest_common.sh@1255 -- # return 0 00:15:41.400 10:45:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:41.400 10:45:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:41.400 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.661 Malloc3 00:15:41.661 10:45:58 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:41.937 10:45:58 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:41.937 Asynchronous Event Request test 00:15:41.937 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.937 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.937 Registering asynchronous event callbacks... 00:15:41.937 Starting namespace attribute notice tests for all controllers... 00:15:41.937 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:41.937 aer_cb - Changed Namespace 00:15:41.937 Cleaning up... 
00:15:42.227 [ 00:15:42.227 { 00:15:42.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.227 "subtype": "Discovery", 00:15:42.227 "listen_addresses": [], 00:15:42.227 "allow_any_host": true, 00:15:42.227 "hosts": [] 00:15:42.227 }, 00:15:42.227 { 00:15:42.227 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.227 "subtype": "NVMe", 00:15:42.227 "listen_addresses": [ 00:15:42.227 { 00:15:42.227 "transport": "VFIOUSER", 00:15:42.227 "trtype": "VFIOUSER", 00:15:42.227 "adrfam": "IPv4", 00:15:42.227 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.227 "trsvcid": "0" 00:15:42.227 } 00:15:42.227 ], 00:15:42.227 "allow_any_host": true, 00:15:42.227 "hosts": [], 00:15:42.227 "serial_number": "SPDK1", 00:15:42.227 "model_number": "SPDK bdev Controller", 00:15:42.227 "max_namespaces": 32, 00:15:42.227 "min_cntlid": 1, 00:15:42.227 "max_cntlid": 65519, 00:15:42.227 "namespaces": [ 00:15:42.227 { 00:15:42.227 "nsid": 1, 00:15:42.227 "bdev_name": "Malloc1", 00:15:42.227 "name": "Malloc1", 00:15:42.227 "nguid": "490954030A6346D79E9E531A34472611", 00:15:42.227 "uuid": "49095403-0a63-46d7-9e9e-531a34472611" 00:15:42.227 }, 00:15:42.227 { 00:15:42.227 "nsid": 2, 00:15:42.227 "bdev_name": "Malloc3", 00:15:42.227 "name": "Malloc3", 00:15:42.227 "nguid": "439C359100F945F48DD502F80F8530B6", 00:15:42.227 "uuid": "439c3591-00f9-45f4-8dd5-02f80f8530b6" 00:15:42.227 } 00:15:42.227 ] 00:15:42.227 }, 00:15:42.227 { 00:15:42.227 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.227 "subtype": "NVMe", 00:15:42.227 "listen_addresses": [ 00:15:42.227 { 00:15:42.227 "transport": "VFIOUSER", 00:15:42.227 "trtype": "VFIOUSER", 00:15:42.227 "adrfam": "IPv4", 00:15:42.227 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.227 "trsvcid": "0" 00:15:42.227 } 00:15:42.227 ], 00:15:42.227 "allow_any_host": true, 00:15:42.227 "hosts": [], 00:15:42.227 "serial_number": "SPDK2", 00:15:42.227 "model_number": "SPDK bdev Controller", 00:15:42.227 "max_namespaces": 32, 00:15:42.227 "min_cntlid": 1, 00:15:42.227 "max_cntlid": 65519, 00:15:42.227 "namespaces": [ 00:15:42.227 { 00:15:42.227 "nsid": 1, 00:15:42.227 "bdev_name": "Malloc2", 00:15:42.227 "name": "Malloc2", 00:15:42.227 "nguid": "89DA9F3C2AFD4918B0D6137F4643A58C", 00:15:42.227 "uuid": "89da9f3c-2afd-4918-b0d6-137f4643a58c" 00:15:42.227 } 00:15:42.227 ] 00:15:42.227 } 00:15:42.227 ] 00:15:42.227 10:45:58 -- target/nvmf_vfio_user.sh@44 -- # wait 3432289 00:15:42.227 10:45:58 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.227 10:45:58 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:42.228 10:45:58 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:42.228 10:45:58 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:42.228 [2024-07-10 10:45:58.819611] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:42.228 [2024-07-10 10:45:58.819652] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432427 ] 00:15:42.228 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.228 [2024-07-10 10:45:58.854588] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:42.228 [2024-07-10 10:45:58.860670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.228 [2024-07-10 10:45:58.860699] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa00f6eb000 00:15:42.228 [2024-07-10 10:45:58.861677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.862680] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.863688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.864692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.865718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.866710] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.867731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.868737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.228 [2024-07-10 10:45:58.869733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.228 [2024-07-10 10:45:58.869771] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa00e4a1000 00:15:42.228 [2024-07-10 10:45:58.870911] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.228 [2024-07-10 10:45:58.888633] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:42.228 [2024-07-10 10:45:58.888667] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:42.228 [2024-07-10 10:45:58.890771] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:42.228 [2024-07-10 10:45:58.890824] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:42.228 [2024-07-10 10:45:58.890909] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:42.228 [2024-07-10 10:45:58.890935] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:42.228 [2024-07-10 10:45:58.890945] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:42.228 [2024-07-10 10:45:58.891780] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:42.228 [2024-07-10 10:45:58.891801] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:42.228 [2024-07-10 10:45:58.891813] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:42.228 [2024-07-10 10:45:58.892783] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:42.228 [2024-07-10 10:45:58.892803] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:42.228 [2024-07-10 10:45:58.892816] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:42.228 [2024-07-10 10:45:58.893786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:42.228 [2024-07-10 10:45:58.893805] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:42.228 [2024-07-10 10:45:58.894790] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:42.228 [2024-07-10 10:45:58.894809] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:42.228 [2024-07-10 10:45:58.894818] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:42.228 [2024-07-10 10:45:58.894829] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:42.228 [2024-07-10 10:45:58.894938] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:42.228 [2024-07-10 10:45:58.894946] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:42.228 [2024-07-10 10:45:58.894954] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:42.228 [2024-07-10 10:45:58.895818] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:42.228 [2024-07-10 10:45:58.896805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:42.228 [2024-07-10 10:45:58.897817] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:42.228 [2024-07-10 10:45:58.898853] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:42.228 [2024-07-10 10:45:58.899833] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:42.228 [2024-07-10 10:45:58.899852] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:42.228 [2024-07-10 10:45:58.899861] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.899884] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:42.228 [2024-07-10 10:45:58.899897] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.899919] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.228 [2024-07-10 10:45:58.899929] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.228 [2024-07-10 10:45:58.899946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.228 [2024-07-10 10:45:58.906452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:42.228 [2024-07-10 10:45:58.906475] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:42.228 [2024-07-10 10:45:58.906483] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:42.228 [2024-07-10 10:45:58.906490] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:42.228 [2024-07-10 10:45:58.906498] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:42.228 [2024-07-10 10:45:58.906506] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:42.228 [2024-07-10 10:45:58.906513] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:42.228 [2024-07-10 10:45:58.906521] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.906538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.906555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:42.228 [2024-07-10 10:45:58.914437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:42.228 [2024-07-10 
10:45:58.914465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.228 [2024-07-10 10:45:58.914479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.228 [2024-07-10 10:45:58.914491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.228 [2024-07-10 10:45:58.914503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.228 [2024-07-10 10:45:58.914511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.914525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.914540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:42.228 [2024-07-10 10:45:58.922435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:42.228 [2024-07-10 10:45:58.922453] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:42.228 [2024-07-10 10:45:58.922462] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.922474] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.922488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.922505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.228 [2024-07-10 10:45:58.930437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:42.228 [2024-07-10 10:45:58.930506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.930521] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:42.228 [2024-07-10 10:45:58.930533] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:42.228 [2024-07-10 10:45:58.930542] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:42.228 [2024-07-10 10:45:58.930551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.938435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
00:15:42.229 [2024-07-10 10:45:58.938463] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:42.229 [2024-07-10 10:45:58.938481] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.938495] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.938507] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.229 [2024-07-10 10:45:58.938515] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.229 [2024-07-10 10:45:58.938525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.946452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.946479] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.946494] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.946507] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.229 [2024-07-10 10:45:58.946515] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.229 [2024-07-10 10:45:58.946525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.954449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.954470] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.954483] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.954497] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.954507] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.954518] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:42.229 [2024-07-10 10:45:58.954527] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:42.229 [2024-07-10 10:45:58.954535] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:42.229 [2024-07-10 
10:45:58.954543] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:42.229 [2024-07-10 10:45:58.954568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.962437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.962464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.970434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.970459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.978449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.978474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.986448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.986485] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:42.229 [2024-07-10 10:45:58.986496] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:42.229 [2024-07-10 10:45:58.986502] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:42.229 [2024-07-10 10:45:58.986508] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:42.229 [2024-07-10 10:45:58.986518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:42.229 [2024-07-10 10:45:58.986529] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:42.229 [2024-07-10 10:45:58.986537] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:42.229 [2024-07-10 10:45:58.986546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.986556] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:42.229 [2024-07-10 10:45:58.986564] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.229 [2024-07-10 10:45:58.986572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.986583] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:42.229 [2024-07-10 10:45:58.986591] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:42.229 [2024-07-10 10:45:58.986600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:42.229 [2024-07-10 10:45:58.994452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.994483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.994499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:42.229 [2024-07-10 10:45:58.994511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:42.229 ===================================================== 00:15:42.229 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.229 ===================================================== 00:15:42.229 Controller Capabilities/Features 00:15:42.229 ================================ 00:15:42.229 Vendor ID: 4e58 00:15:42.229 Subsystem Vendor ID: 4e58 00:15:42.229 Serial Number: SPDK2 00:15:42.229 Model Number: SPDK bdev Controller 00:15:42.229 Firmware Version: 24.01.1 00:15:42.229 Recommended Arb Burst: 6 00:15:42.229 IEEE OUI Identifier: 8d 6b 50 00:15:42.229 Multi-path I/O 00:15:42.229 May have multiple subsystem ports: Yes 00:15:42.229 May have multiple controllers: Yes 00:15:42.229 Associated with SR-IOV VF: No 00:15:42.229 Max Data Transfer Size: 131072 00:15:42.229 Max Number of Namespaces: 32 00:15:42.229 Max Number of I/O Queues: 127 00:15:42.229 NVMe Specification Version (VS): 1.3 00:15:42.229 NVMe Specification Version (Identify): 1.3 00:15:42.229 Maximum Queue Entries: 256 00:15:42.229 Contiguous Queues Required: Yes 00:15:42.229 Arbitration Mechanisms Supported 00:15:42.229 Weighted Round Robin: Not Supported 00:15:42.229 Vendor Specific: Not Supported 00:15:42.229 Reset Timeout: 15000 ms 00:15:42.229 Doorbell Stride: 4 bytes 00:15:42.229 NVM Subsystem Reset: Not Supported 00:15:42.229 Command Sets Supported 00:15:42.229 NVM Command Set: Supported 00:15:42.229 Boot Partition: Not Supported 00:15:42.229 Memory Page Size Minimum: 4096 bytes 00:15:42.229 Memory Page Size Maximum: 4096 bytes 00:15:42.229 Persistent Memory Region: Not Supported 00:15:42.229 Optional Asynchronous Events Supported 00:15:42.229 Namespace Attribute Notices: Supported 00:15:42.229 Firmware Activation Notices: Not Supported 00:15:42.229 ANA Change Notices: Not Supported 00:15:42.229 PLE Aggregate Log Change Notices: Not Supported 00:15:42.229 LBA Status Info Alert Notices: Not Supported 00:15:42.229 EGE Aggregate Log Change Notices: Not Supported 00:15:42.229 Normal NVM Subsystem Shutdown event: Not Supported 00:15:42.229 Zone Descriptor Change Notices: Not Supported 00:15:42.229 Discovery Log Change Notices: Not Supported 00:15:42.229 Controller Attributes 00:15:42.229 128-bit Host Identifier: Supported 00:15:42.229 Non-Operational Permissive Mode: Not Supported 00:15:42.229 NVM Sets: Not Supported 00:15:42.229 Read Recovery Levels: Not Supported 00:15:42.229 Endurance Groups: Not Supported 00:15:42.229 Predictable Latency Mode: Not Supported 00:15:42.229 Traffic Based Keep ALive: Not Supported 00:15:42.229 Namespace Granularity: Not Supported 00:15:42.229 SQ Associations: Not Supported 00:15:42.229 UUID List: Not Supported 00:15:42.229 Multi-Domain Subsystem: Not Supported 00:15:42.229 Fixed Capacity Management: Not Supported 
00:15:42.229 Variable Capacity Management: Not Supported 00:15:42.229 Delete Endurance Group: Not Supported 00:15:42.229 Delete NVM Set: Not Supported 00:15:42.229 Extended LBA Formats Supported: Not Supported 00:15:42.229 Flexible Data Placement Supported: Not Supported 00:15:42.229 00:15:42.229 Controller Memory Buffer Support 00:15:42.229 ================================ 00:15:42.229 Supported: No 00:15:42.229 00:15:42.229 Persistent Memory Region Support 00:15:42.229 ================================ 00:15:42.229 Supported: No 00:15:42.229 00:15:42.229 Admin Command Set Attributes 00:15:42.229 ============================ 00:15:42.229 Security Send/Receive: Not Supported 00:15:42.229 Format NVM: Not Supported 00:15:42.229 Firmware Activate/Download: Not Supported 00:15:42.229 Namespace Management: Not Supported 00:15:42.229 Device Self-Test: Not Supported 00:15:42.229 Directives: Not Supported 00:15:42.229 NVMe-MI: Not Supported 00:15:42.229 Virtualization Management: Not Supported 00:15:42.229 Doorbell Buffer Config: Not Supported 00:15:42.229 Get LBA Status Capability: Not Supported 00:15:42.229 Command & Feature Lockdown Capability: Not Supported 00:15:42.229 Abort Command Limit: 4 00:15:42.229 Async Event Request Limit: 4 00:15:42.229 Number of Firmware Slots: N/A 00:15:42.230 Firmware Slot 1 Read-Only: N/A 00:15:42.230 Firmware Activation Without Reset: N/A 00:15:42.230 Multiple Update Detection Support: N/A 00:15:42.230 Firmware Update Granularity: No Information Provided 00:15:42.230 Per-Namespace SMART Log: No 00:15:42.230 Asymmetric Namespace Access Log Page: Not Supported 00:15:42.230 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:42.230 Command Effects Log Page: Supported 00:15:42.230 Get Log Page Extended Data: Supported 00:15:42.230 Telemetry Log Pages: Not Supported 00:15:42.230 Persistent Event Log Pages: Not Supported 00:15:42.230 Supported Log Pages Log Page: May Support 00:15:42.230 Commands Supported & Effects Log Page: Not Supported 00:15:42.230 Feature Identifiers & Effects Log Page:May Support 00:15:42.230 NVMe-MI Commands & Effects Log Page: May Support 00:15:42.230 Data Area 4 for Telemetry Log: Not Supported 00:15:42.230 Error Log Page Entries Supported: 128 00:15:42.230 Keep Alive: Supported 00:15:42.230 Keep Alive Granularity: 10000 ms 00:15:42.230 00:15:42.230 NVM Command Set Attributes 00:15:42.230 ========================== 00:15:42.230 Submission Queue Entry Size 00:15:42.230 Max: 64 00:15:42.230 Min: 64 00:15:42.230 Completion Queue Entry Size 00:15:42.230 Max: 16 00:15:42.230 Min: 16 00:15:42.230 Number of Namespaces: 32 00:15:42.230 Compare Command: Supported 00:15:42.230 Write Uncorrectable Command: Not Supported 00:15:42.230 Dataset Management Command: Supported 00:15:42.230 Write Zeroes Command: Supported 00:15:42.230 Set Features Save Field: Not Supported 00:15:42.230 Reservations: Not Supported 00:15:42.230 Timestamp: Not Supported 00:15:42.230 Copy: Supported 00:15:42.230 Volatile Write Cache: Present 00:15:42.230 Atomic Write Unit (Normal): 1 00:15:42.230 Atomic Write Unit (PFail): 1 00:15:42.230 Atomic Compare & Write Unit: 1 00:15:42.230 Fused Compare & Write: Supported 00:15:42.230 Scatter-Gather List 00:15:42.230 SGL Command Set: Supported (Dword aligned) 00:15:42.230 SGL Keyed: Not Supported 00:15:42.230 SGL Bit Bucket Descriptor: Not Supported 00:15:42.230 SGL Metadata Pointer: Not Supported 00:15:42.230 Oversized SGL: Not Supported 00:15:42.230 SGL Metadata Address: Not Supported 00:15:42.230 SGL Offset: Not Supported 00:15:42.230 
Transport SGL Data Block: Not Supported 00:15:42.230 Replay Protected Memory Block: Not Supported 00:15:42.230 00:15:42.230 Firmware Slot Information 00:15:42.230 ========================= 00:15:42.230 Active slot: 1 00:15:42.230 Slot 1 Firmware Revision: 24.01.1 00:15:42.230 00:15:42.230 00:15:42.230 Commands Supported and Effects 00:15:42.230 ============================== 00:15:42.230 Admin Commands 00:15:42.230 -------------- 00:15:42.230 Get Log Page (02h): Supported 00:15:42.230 Identify (06h): Supported 00:15:42.230 Abort (08h): Supported 00:15:42.230 Set Features (09h): Supported 00:15:42.230 Get Features (0Ah): Supported 00:15:42.230 Asynchronous Event Request (0Ch): Supported 00:15:42.230 Keep Alive (18h): Supported 00:15:42.230 I/O Commands 00:15:42.230 ------------ 00:15:42.230 Flush (00h): Supported LBA-Change 00:15:42.230 Write (01h): Supported LBA-Change 00:15:42.230 Read (02h): Supported 00:15:42.230 Compare (05h): Supported 00:15:42.230 Write Zeroes (08h): Supported LBA-Change 00:15:42.230 Dataset Management (09h): Supported LBA-Change 00:15:42.230 Copy (19h): Supported LBA-Change 00:15:42.230 Unknown (79h): Supported LBA-Change 00:15:42.230 Unknown (7Ah): Supported 00:15:42.230 00:15:42.230 Error Log 00:15:42.230 ========= 00:15:42.230 00:15:42.230 Arbitration 00:15:42.230 =========== 00:15:42.230 Arbitration Burst: 1 00:15:42.230 00:15:42.230 Power Management 00:15:42.230 ================ 00:15:42.230 Number of Power States: 1 00:15:42.230 Current Power State: Power State #0 00:15:42.230 Power State #0: 00:15:42.230 Max Power: 0.00 W 00:15:42.230 Non-Operational State: Operational 00:15:42.230 Entry Latency: Not Reported 00:15:42.230 Exit Latency: Not Reported 00:15:42.230 Relative Read Throughput: 0 00:15:42.230 Relative Read Latency: 0 00:15:42.230 Relative Write Throughput: 0 00:15:42.230 Relative Write Latency: 0 00:15:42.230 Idle Power: Not Reported 00:15:42.230 Active Power: Not Reported 00:15:42.230 Non-Operational Permissive Mode: Not Supported 00:15:42.230 00:15:42.230 Health Information 00:15:42.230 ================== 00:15:42.230 Critical Warnings: 00:15:42.230 Available Spare Space: OK 00:15:42.230 Temperature: OK 00:15:42.230 Device Reliability: OK 00:15:42.230 Read Only: No 00:15:42.230 Volatile Memory Backup: OK 00:15:42.230 Current Temperature: 0 Kelvin[2024-07-10 10:45:58.994635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:42.230 [2024-07-10 10:45:59.002435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:42.230 [2024-07-10 10:45:59.002480] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:42.230 [2024-07-10 10:45:59.002497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.230 [2024-07-10 10:45:59.002508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.230 [2024-07-10 10:45:59.002517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.230 [2024-07-10 10:45:59.002527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.230 [2024-07-10 10:45:59.002603] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:42.230 [2024-07-10 10:45:59.002624] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:42.230 [2024-07-10 10:45:59.003649] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:42.230 [2024-07-10 10:45:59.003664] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:42.230 [2024-07-10 10:45:59.004611] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:42.230 [2024-07-10 10:45:59.004635] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:42.230 [2024-07-10 10:45:59.004686] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:42.230 [2024-07-10 10:45:59.007436] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.488 (-273 Celsius) 00:15:42.488 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:42.488 Available Spare: 0% 00:15:42.488 Available Spare Threshold: 0% 00:15:42.488 Life Percentage Used: 0% 00:15:42.488 Data Units Read: 0 00:15:42.488 Data Units Written: 0 00:15:42.488 Host Read Commands: 0 00:15:42.488 Host Write Commands: 0 00:15:42.488 Controller Busy Time: 0 minutes 00:15:42.488 Power Cycles: 0 00:15:42.488 Power On Hours: 0 hours 00:15:42.488 Unsafe Shutdowns: 0 00:15:42.488 Unrecoverable Media Errors: 0 00:15:42.488 Lifetime Error Log Entries: 0 00:15:42.488 Warning Temperature Time: 0 minutes 00:15:42.488 Critical Temperature Time: 0 minutes 00:15:42.488 00:15:42.488 Number of Queues 00:15:42.488 ================ 00:15:42.488 Number of I/O Submission Queues: 127 00:15:42.488 Number of I/O Completion Queues: 127 00:15:42.488 00:15:42.488 Active Namespaces 00:15:42.488 ================= 00:15:42.488 Namespace ID:1 00:15:42.488 Error Recovery Timeout: Unlimited 00:15:42.488 Command Set Identifier: NVM (00h) 00:15:42.488 Deallocate: Supported 00:15:42.488 Deallocated/Unwritten Error: Not Supported 00:15:42.488 Deallocated Read Value: Unknown 00:15:42.488 Deallocate in Write Zeroes: Not Supported 00:15:42.488 Deallocated Guard Field: 0xFFFF 00:15:42.488 Flush: Supported 00:15:42.488 Reservation: Supported 00:15:42.488 Namespace Sharing Capabilities: Multiple Controllers 00:15:42.488 Size (in LBAs): 131072 (0GiB) 00:15:42.488 Capacity (in LBAs): 131072 (0GiB) 00:15:42.488 Utilization (in LBAs): 131072 (0GiB) 00:15:42.488 NGUID: 89DA9F3C2AFD4918B0D6137F4643A58C 00:15:42.488 UUID: 89da9f3c-2afd-4918-b0d6-137f4643a58c 00:15:42.488 Thin Provisioning: Not Supported 00:15:42.488 Per-NS Atomic Units: Yes 00:15:42.488 Atomic Boundary Size (Normal): 0 00:15:42.488 Atomic Boundary Size (PFail): 0 00:15:42.488 Atomic Boundary Offset: 0 00:15:42.488 Maximum Single Source Range Length: 65535 00:15:42.488 Maximum Copy Length: 65535 00:15:42.488 Maximum Source Range Count: 1 00:15:42.488 NGUID/EUI64 Never Reused: No 00:15:42.488 Namespace Write Protected: No 00:15:42.488 Number of LBA Formats: 1 00:15:42.488 Current LBA Format: LBA Format #00 00:15:42.488 LBA Format #00: Data Size: 512 Metadata Size: 0 
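Note: the capability, log-page and namespace report above (from "NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2" down to the LBA format listing) is an identify-style dump of the vfio-user controller backing nqn.2019-07.io.spdk:cnode2. As a rough sketch only, a comparable dump can be produced by hand against the same socket; the binary path below assumes the identify example sits alongside the other example tools used later in this log (build/examples/), since the exact command that emitted this dump is not shown in this excerpt:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

The -r transport string is the same one passed to the spdk_nvme_perf, reconnect, arbitration, hello_world and overhead runs that follow.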
00:15:42.488 00:15:42.488 10:45:59 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:42.488 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.746 Initializing NVMe Controllers 00:15:47.746 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:47.746 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:47.746 Initialization complete. Launching workers. 00:15:47.746 ======================================================== 00:15:47.746 Latency(us) 00:15:47.746 Device Information : IOPS MiB/s Average min max 00:15:47.746 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37552.15 146.69 3407.90 1158.60 7645.73 00:15:47.746 ======================================================== 00:15:47.746 Total : 37552.15 146.69 3407.90 1158.60 7645.73 00:15:47.746 00:15:47.746 10:46:04 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:47.746 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.004 Initializing NVMe Controllers 00:15:53.004 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:53.004 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:53.004 Initialization complete. Launching workers. 00:15:53.004 ======================================================== 00:15:53.004 Latency(us) 00:15:53.004 Device Information : IOPS MiB/s Average min max 00:15:53.004 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36256.40 141.63 3530.08 1151.25 9757.69 00:15:53.004 ======================================================== 00:15:53.004 Total : 36256.40 141.63 3530.08 1151.25 9757.69 00:15:53.004 00:15:53.004 10:46:09 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:53.004 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.267 Initializing NVMe Controllers 00:15:58.267 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.267 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:58.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:58.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:58.267 Initialization complete. Launching workers. 
00:15:58.267 Starting thread on core 2 00:15:58.267 Starting thread on core 3 00:15:58.267 Starting thread on core 1 00:15:58.267 10:46:14 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:58.267 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.586 Initializing NVMe Controllers 00:16:01.586 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:01.586 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:01.586 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:01.586 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:01.586 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:01.586 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:01.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:01.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:01.586 Initialization complete. Launching workers. 00:16:01.586 Starting thread on core 1 with urgent priority queue 00:16:01.586 Starting thread on core 2 with urgent priority queue 00:16:01.586 Starting thread on core 3 with urgent priority queue 00:16:01.586 Starting thread on core 0 with urgent priority queue 00:16:01.586 SPDK bdev Controller (SPDK2 ) core 0: 5473.67 IO/s 18.27 secs/100000 ios 00:16:01.586 SPDK bdev Controller (SPDK2 ) core 1: 5438.33 IO/s 18.39 secs/100000 ios 00:16:01.586 SPDK bdev Controller (SPDK2 ) core 2: 5746.67 IO/s 17.40 secs/100000 ios 00:16:01.586 SPDK bdev Controller (SPDK2 ) core 3: 5544.00 IO/s 18.04 secs/100000 ios 00:16:01.586 ======================================================== 00:16:01.586 00:16:01.586 10:46:18 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:01.843 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.100 Initializing NVMe Controllers 00:16:02.100 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.100 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.100 Namespace ID: 1 size: 0GB 00:16:02.100 Initialization complete. 00:16:02.100 INFO: using host memory buffer for IO 00:16:02.100 Hello world! 00:16:02.100 10:46:18 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.100 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.471 Initializing NVMe Controllers 00:16:03.471 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.471 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:03.471 Initialization complete. Launching workers. 
00:16:03.471 submit (in ns) avg, min, max = 6478.6, 3470.0, 4015906.7 00:16:03.471 complete (in ns) avg, min, max = 26302.3, 2042.2, 5994748.9 00:16:03.471 00:16:03.471 Submit histogram 00:16:03.471 ================ 00:16:03.471 Range in us Cumulative Count 00:16:03.471 3.461 - 3.484: 0.3559% ( 49) 00:16:03.471 3.484 - 3.508: 1.0387% ( 94) 00:16:03.471 3.508 - 3.532: 3.2542% ( 305) 00:16:03.471 3.532 - 3.556: 7.2855% ( 555) 00:16:03.472 3.556 - 3.579: 15.1958% ( 1089) 00:16:03.472 3.579 - 3.603: 23.9195% ( 1201) 00:16:03.472 3.603 - 3.627: 34.3067% ( 1430) 00:16:03.472 3.627 - 3.650: 42.6164% ( 1144) 00:16:03.472 3.650 - 3.674: 49.6986% ( 975) 00:16:03.472 3.674 - 3.698: 54.6161% ( 677) 00:16:03.472 3.698 - 3.721: 59.8024% ( 714) 00:16:03.472 3.721 - 3.745: 62.9839% ( 438) 00:16:03.472 3.745 - 3.769: 65.9839% ( 413) 00:16:03.472 3.769 - 3.793: 69.1509% ( 436) 00:16:03.472 3.793 - 3.816: 72.8554% ( 510) 00:16:03.472 3.816 - 3.840: 76.5744% ( 512) 00:16:03.472 3.840 - 3.864: 80.5404% ( 546) 00:16:03.472 3.864 - 3.887: 83.7583% ( 443) 00:16:03.472 3.887 - 3.911: 86.2643% ( 345) 00:16:03.472 3.911 - 3.935: 88.0584% ( 247) 00:16:03.472 3.935 - 3.959: 89.4676% ( 194) 00:16:03.472 3.959 - 3.982: 90.6879% ( 168) 00:16:03.472 3.982 - 4.006: 91.6903% ( 138) 00:16:03.472 4.006 - 4.030: 92.4239% ( 101) 00:16:03.472 4.030 - 4.053: 93.3028% ( 121) 00:16:03.472 4.053 - 4.077: 94.1164% ( 112) 00:16:03.472 4.077 - 4.101: 94.7265% ( 84) 00:16:03.472 4.101 - 4.124: 95.1478% ( 58) 00:16:03.472 4.124 - 4.148: 95.4892% ( 47) 00:16:03.472 4.148 - 4.172: 95.7507% ( 36) 00:16:03.472 4.172 - 4.196: 95.9323% ( 25) 00:16:03.472 4.196 - 4.219: 96.0776% ( 20) 00:16:03.472 4.219 - 4.243: 96.2301% ( 21) 00:16:03.472 4.243 - 4.267: 96.3681% ( 19) 00:16:03.472 4.267 - 4.290: 96.4553% ( 12) 00:16:03.472 4.290 - 4.314: 96.5570% ( 14) 00:16:03.472 4.314 - 4.338: 96.6224% ( 9) 00:16:03.472 4.338 - 4.361: 96.6587% ( 5) 00:16:03.472 4.361 - 4.385: 96.6877% ( 4) 00:16:03.472 4.385 - 4.409: 96.7168% ( 4) 00:16:03.472 4.409 - 4.433: 96.7458% ( 4) 00:16:03.472 4.433 - 4.456: 96.7822% ( 5) 00:16:03.472 4.456 - 4.480: 96.7967% ( 2) 00:16:03.472 4.480 - 4.504: 96.8403% ( 6) 00:16:03.472 4.504 - 4.527: 96.8475% ( 1) 00:16:03.472 4.527 - 4.551: 96.8621% ( 2) 00:16:03.472 4.551 - 4.575: 96.8766% ( 2) 00:16:03.472 4.599 - 4.622: 96.8911% ( 2) 00:16:03.472 4.622 - 4.646: 96.8984% ( 1) 00:16:03.472 4.646 - 4.670: 96.9056% ( 1) 00:16:03.472 4.693 - 4.717: 96.9129% ( 1) 00:16:03.472 4.717 - 4.741: 96.9347% ( 3) 00:16:03.472 4.741 - 4.764: 96.9638% ( 4) 00:16:03.472 4.764 - 4.788: 96.9783% ( 2) 00:16:03.472 4.788 - 4.812: 97.0219% ( 6) 00:16:03.472 4.812 - 4.836: 97.0582% ( 5) 00:16:03.472 4.836 - 4.859: 97.1381% ( 11) 00:16:03.472 4.859 - 4.883: 97.1962% ( 8) 00:16:03.472 4.883 - 4.907: 97.2398% ( 6) 00:16:03.472 4.907 - 4.930: 97.2906% ( 7) 00:16:03.472 4.930 - 4.954: 97.3342% ( 6) 00:16:03.472 4.954 - 4.978: 97.3851% ( 7) 00:16:03.472 4.978 - 5.001: 97.4650% ( 11) 00:16:03.472 5.001 - 5.025: 97.5158% ( 7) 00:16:03.472 5.025 - 5.049: 97.5594% ( 6) 00:16:03.472 5.049 - 5.073: 97.6175% ( 8) 00:16:03.472 5.073 - 5.096: 97.6538% ( 5) 00:16:03.472 5.120 - 5.144: 97.6829% ( 4) 00:16:03.472 5.144 - 5.167: 97.7119% ( 4) 00:16:03.472 5.167 - 5.191: 97.7410% ( 4) 00:16:03.472 5.191 - 5.215: 97.7700% ( 4) 00:16:03.472 5.215 - 5.239: 97.7918% ( 3) 00:16:03.472 5.239 - 5.262: 97.8063% ( 2) 00:16:03.472 5.262 - 5.286: 97.8354% ( 4) 00:16:03.472 5.286 - 5.310: 97.8427% ( 1) 00:16:03.472 5.333 - 5.357: 97.8645% ( 3) 00:16:03.472 5.404 - 5.428: 97.8862% ( 3) 
00:16:03.472 5.428 - 5.452: 97.8935% ( 1) 00:16:03.472 5.452 - 5.476: 97.9008% ( 1) 00:16:03.472 5.499 - 5.523: 97.9080% ( 1) 00:16:03.472 5.523 - 5.547: 97.9226% ( 2) 00:16:03.472 5.547 - 5.570: 97.9444% ( 3) 00:16:03.472 5.570 - 5.594: 97.9516% ( 1) 00:16:03.472 5.665 - 5.689: 97.9589% ( 1) 00:16:03.472 5.713 - 5.736: 97.9662% ( 1) 00:16:03.472 5.784 - 5.807: 97.9807% ( 2) 00:16:03.472 5.807 - 5.831: 97.9879% ( 1) 00:16:03.472 5.831 - 5.855: 97.9952% ( 1) 00:16:03.472 5.855 - 5.879: 98.0025% ( 1) 00:16:03.472 5.902 - 5.926: 98.0315% ( 4) 00:16:03.472 5.926 - 5.950: 98.0388% ( 1) 00:16:03.472 6.021 - 6.044: 98.0461% ( 1) 00:16:03.472 6.044 - 6.068: 98.0533% ( 1) 00:16:03.472 6.068 - 6.116: 98.0606% ( 1) 00:16:03.472 6.116 - 6.163: 98.0751% ( 2) 00:16:03.472 6.305 - 6.353: 98.0824% ( 1) 00:16:03.472 6.353 - 6.400: 98.0896% ( 1) 00:16:03.472 6.447 - 6.495: 98.1114% ( 3) 00:16:03.472 6.542 - 6.590: 98.1260% ( 2) 00:16:03.472 6.732 - 6.779: 98.1405% ( 2) 00:16:03.472 6.779 - 6.827: 98.1477% ( 1) 00:16:03.472 6.827 - 6.874: 98.1623% ( 2) 00:16:03.472 6.874 - 6.921: 98.1695% ( 1) 00:16:03.472 6.969 - 7.016: 98.1768% ( 1) 00:16:03.472 7.064 - 7.111: 98.1841% ( 1) 00:16:03.472 7.111 - 7.159: 98.1913% ( 1) 00:16:03.472 7.159 - 7.206: 98.1986% ( 1) 00:16:03.472 7.206 - 7.253: 98.2059% ( 1) 00:16:03.472 7.253 - 7.301: 98.2204% ( 2) 00:16:03.472 7.301 - 7.348: 98.2422% ( 3) 00:16:03.472 7.396 - 7.443: 98.2494% ( 1) 00:16:03.472 7.443 - 7.490: 98.2567% ( 1) 00:16:03.472 7.490 - 7.538: 98.2712% ( 2) 00:16:03.472 7.538 - 7.585: 98.2930% ( 3) 00:16:03.472 7.585 - 7.633: 98.3075% ( 2) 00:16:03.472 7.680 - 7.727: 98.3293% ( 3) 00:16:03.472 7.727 - 7.775: 98.3439% ( 2) 00:16:03.472 7.775 - 7.822: 98.3511% ( 1) 00:16:03.472 7.822 - 7.870: 98.3657% ( 2) 00:16:03.472 7.870 - 7.917: 98.3802% ( 2) 00:16:03.472 7.917 - 7.964: 98.3947% ( 2) 00:16:03.472 7.964 - 8.012: 98.4020% ( 1) 00:16:03.472 8.012 - 8.059: 98.4092% ( 1) 00:16:03.472 8.059 - 8.107: 98.4165% ( 1) 00:16:03.472 8.154 - 8.201: 98.4238% ( 1) 00:16:03.472 8.201 - 8.249: 98.4383% ( 2) 00:16:03.472 8.249 - 8.296: 98.4456% ( 1) 00:16:03.472 8.344 - 8.391: 98.4528% ( 1) 00:16:03.472 8.486 - 8.533: 98.4601% ( 1) 00:16:03.472 8.676 - 8.723: 98.4746% ( 2) 00:16:03.472 8.770 - 8.818: 98.4819% ( 1) 00:16:03.472 8.865 - 8.913: 98.4891% ( 1) 00:16:03.472 8.913 - 8.960: 98.5109% ( 3) 00:16:03.472 8.960 - 9.007: 98.5182% ( 1) 00:16:03.472 9.007 - 9.055: 98.5255% ( 1) 00:16:03.472 9.150 - 9.197: 98.5327% ( 1) 00:16:03.472 9.292 - 9.339: 98.5400% ( 1) 00:16:03.472 9.387 - 9.434: 98.5473% ( 1) 00:16:03.472 9.624 - 9.671: 98.5545% ( 1) 00:16:03.472 9.719 - 9.766: 98.5618% ( 1) 00:16:03.472 9.908 - 9.956: 98.5690% ( 1) 00:16:03.472 9.956 - 10.003: 98.5836% ( 2) 00:16:03.472 10.003 - 10.050: 98.5908% ( 1) 00:16:03.472 10.050 - 10.098: 98.5981% ( 1) 00:16:03.472 10.098 - 10.145: 98.6054% ( 1) 00:16:03.472 10.240 - 10.287: 98.6126% ( 1) 00:16:03.472 10.430 - 10.477: 98.6199% ( 1) 00:16:03.472 10.572 - 10.619: 98.6272% ( 1) 00:16:03.472 10.667 - 10.714: 98.6344% ( 1) 00:16:03.472 10.761 - 10.809: 98.6489% ( 2) 00:16:03.472 10.856 - 10.904: 98.6562% ( 1) 00:16:03.472 10.951 - 10.999: 98.6635% ( 1) 00:16:03.472 11.046 - 11.093: 98.6780% ( 2) 00:16:03.472 11.141 - 11.188: 98.6853% ( 1) 00:16:03.472 11.330 - 11.378: 98.6925% ( 1) 00:16:03.472 11.378 - 11.425: 98.6998% ( 1) 00:16:03.472 11.615 - 11.662: 98.7071% ( 1) 00:16:03.472 11.852 - 11.899: 98.7216% ( 2) 00:16:03.472 11.994 - 12.041: 98.7288% ( 1) 00:16:03.472 12.041 - 12.089: 98.7361% ( 1) 00:16:03.472 12.136 - 12.231: 
98.7434% ( 1) 00:16:03.472 12.516 - 12.610: 98.7579% ( 2) 00:16:03.472 12.610 - 12.705: 98.7724% ( 2) 00:16:03.472 12.800 - 12.895: 98.7870% ( 2) 00:16:03.472 12.990 - 13.084: 98.8087% ( 3) 00:16:03.472 13.274 - 13.369: 98.8160% ( 1) 00:16:03.472 13.559 - 13.653: 98.8233% ( 1) 00:16:03.472 13.748 - 13.843: 98.8305% ( 1) 00:16:03.472 13.843 - 13.938: 98.8378% ( 1) 00:16:03.472 13.938 - 14.033: 98.8451% ( 1) 00:16:03.472 14.222 - 14.317: 98.8669% ( 3) 00:16:03.472 14.317 - 14.412: 98.8886% ( 3) 00:16:03.472 14.412 - 14.507: 98.9032% ( 2) 00:16:03.472 14.507 - 14.601: 98.9177% ( 2) 00:16:03.472 17.067 - 17.161: 98.9250% ( 1) 00:16:03.472 17.161 - 17.256: 98.9395% ( 2) 00:16:03.472 17.256 - 17.351: 98.9685% ( 4) 00:16:03.472 17.351 - 17.446: 98.9976% ( 4) 00:16:03.472 17.446 - 17.541: 99.0484% ( 7) 00:16:03.472 17.541 - 17.636: 99.0920% ( 6) 00:16:03.472 17.636 - 17.730: 99.1429% ( 7) 00:16:03.472 17.730 - 17.825: 99.1574% ( 2) 00:16:03.472 17.825 - 17.920: 99.2010% ( 6) 00:16:03.472 17.920 - 18.015: 99.2736% ( 10) 00:16:03.472 18.015 - 18.110: 99.3027% ( 4) 00:16:03.472 18.110 - 18.204: 99.3535% ( 7) 00:16:03.472 18.204 - 18.299: 99.4480% ( 13) 00:16:03.472 18.299 - 18.394: 99.5424% ( 13) 00:16:03.472 18.394 - 18.489: 99.6295% ( 12) 00:16:03.472 18.489 - 18.584: 99.6804% ( 7) 00:16:03.472 18.584 - 18.679: 99.7240% ( 6) 00:16:03.472 18.679 - 18.773: 99.7603% ( 5) 00:16:03.472 18.773 - 18.868: 99.7821% ( 3) 00:16:03.472 18.868 - 18.963: 99.8039% ( 3) 00:16:03.472 18.963 - 19.058: 99.8402% ( 5) 00:16:03.472 19.058 - 19.153: 99.8547% ( 2) 00:16:03.472 19.153 - 19.247: 99.8838% ( 4) 00:16:03.472 19.437 - 19.532: 99.8983% ( 2) 00:16:03.472 19.721 - 19.816: 99.9056% ( 1) 00:16:03.472 20.764 - 20.859: 99.9128% ( 1) 00:16:03.472 24.273 - 24.462: 99.9201% ( 1) 00:16:03.473 28.824 - 29.013: 99.9274% ( 1) 00:16:03.473 29.203 - 29.393: 99.9346% ( 1) 00:16:03.473 2997.665 - 3009.801: 99.9419% ( 1) 00:16:03.473 3980.705 - 4004.978: 99.9709% ( 4) 00:16:03.473 4004.978 - 4029.250: 100.0000% ( 4) 00:16:03.473 00:16:03.473 Complete histogram 00:16:03.473 ================== 00:16:03.473 Range in us Cumulative Count 00:16:03.473 2.039 - 2.050: 1.8886% ( 260) 00:16:03.473 2.050 - 2.062: 17.6727% ( 2173) 00:16:03.473 2.062 - 2.074: 22.1108% ( 611) 00:16:03.473 2.074 - 2.086: 34.3575% ( 1686) 00:16:03.473 2.086 - 2.098: 58.1100% ( 3270) 00:16:03.473 2.098 - 2.110: 63.5796% ( 753) 00:16:03.473 2.110 - 2.121: 66.2308% ( 365) 00:16:03.473 2.121 - 2.133: 69.2961% ( 422) 00:16:03.473 2.133 - 2.145: 70.2913% ( 137) 00:16:03.473 2.145 - 2.157: 76.4074% ( 842) 00:16:03.473 2.157 - 2.169: 81.7171% ( 731) 00:16:03.473 2.169 - 2.181: 83.0973% ( 190) 00:16:03.473 2.181 - 2.193: 85.3345% ( 308) 00:16:03.473 2.193 - 2.204: 87.0705% ( 239) 00:16:03.473 2.204 - 2.216: 87.7315% ( 91) 00:16:03.473 2.216 - 2.228: 91.0729% ( 460) 00:16:03.473 2.228 - 2.240: 93.3028% ( 307) 00:16:03.473 2.240 - 2.252: 94.0800% ( 107) 00:16:03.473 2.252 - 2.264: 94.5667% ( 67) 00:16:03.473 2.264 - 2.276: 94.7701% ( 28) 00:16:03.473 2.276 - 2.287: 95.1260% ( 49) 00:16:03.473 2.287 - 2.299: 95.4602% ( 46) 00:16:03.473 2.299 - 2.311: 95.5764% ( 16) 00:16:03.473 2.311 - 2.323: 95.6635% ( 12) 00:16:03.473 2.323 - 2.335: 95.9541% ( 40) 00:16:03.473 2.335 - 2.347: 96.2955% ( 47) 00:16:03.473 2.347 - 2.359: 96.5425% ( 34) 00:16:03.473 2.359 - 2.370: 96.8403% ( 41) 00:16:03.473 2.370 - 2.382: 97.1381% ( 41) 00:16:03.473 2.382 - 2.394: 97.3705% ( 32) 00:16:03.473 2.394 - 2.406: 97.5521% ( 25) 00:16:03.473 2.406 - 2.418: 97.6465% ( 13) 00:16:03.473 2.418 - 2.430: 
97.7047% ( 8) 00:16:03.473 2.430 - 2.441: 97.8209% ( 16) 00:16:03.473 2.441 - 2.453: 97.9080% ( 12) 00:16:03.473 2.453 - 2.465: 97.9589% ( 7) 00:16:03.473 2.465 - 2.477: 97.9807% ( 3) 00:16:03.473 2.477 - 2.489: 98.0097% ( 4) 00:16:03.473 2.489 - 2.501: 98.0388% ( 4) 00:16:03.473 2.501 - 2.513: 98.0533% ( 2) 00:16:03.473 2.513 - 2.524: 98.0751% ( 3) 00:16:03.473 2.536 - 2.548: 98.0969% ( 3) 00:16:03.473 2.548 - 2.560: 98.1042% ( 1) 00:16:03.473 2.560 - 2.572: 98.1114% ( 1) 00:16:03.473 2.572 - 2.584: 98.1187% ( 1) 00:16:03.473 2.584 - 2.596: 98.1332% ( 2) 00:16:03.473 2.596 - 2.607: 98.1405% ( 1) 00:16:03.473 2.631 - 2.643: 98.1477% ( 1) 00:16:03.473 2.667 - 2.679: 98.1695% ( 3) 00:16:03.473 2.679 - 2.690: 98.1768% ( 1) 00:16:03.473 2.690 - 2.702: 98.1841% ( 1) 00:16:03.473 2.702 - 2.714: 98.1986% ( 2) 00:16:03.473 2.738 - 2.750: 98.2059% ( 1) 00:16:03.473 2.833 - 2.844: 98.2131% ( 1) 00:16:03.473 2.892 - 2.904: 98.2204% ( 1) 00:16:03.473 2.904 - 2.916: 98.2276% ( 1) 00:16:03.473 2.916 - 2.927: 98.2349% ( 1) 00:16:03.473 2.927 - 2.939: 98.2422% ( 1) 00:16:03.473 2.939 - 2.951: 98.2494% ( 1) 00:16:03.473 2.963 - 2.975: 98.2567% ( 1) 00:16:03.473 2.999 - 3.010: 98.2785% ( 3) 00:16:03.473 3.010 - 3.022: 98.2858% ( 1) 00:16:03.473 3.034 - 3.058: 98.2930% ( 1) 00:16:03.473 3.058 - 3.081: 98.3221% ( 4) 00:16:03.473 3.081 - 3.105: 98.3439% ( 3) 00:16:03.473 3.105 - 3.129: 98.3511% ( 1) 00:16:03.473 3.129 - 3.153: 98.3584% ( 1) 00:16:03.473 3.224 - 3.247: 98.3729% ( 2) 00:16:03.473 3.342 - 3.366: 98.3947% ( 3) 00:16:03.473 3.366 - 3.390: 98.4092% ( 2) 00:16:03.473 3.390 - 3.413: 98.4165% ( 1) 00:16:03.473 3.413 - 3.437: 98.4238% ( 1) 00:16:03.473 3.437 - 3.461: 98.4456% ( 3) 00:16:03.473 3.461 - 3.484: 98.4673% ( 3) 00:16:03.473 3.484 - 3.508: 98.4746% ( 1) 00:16:03.473 3.508 - 3.532: 98.4819% ( 1) 00:16:03.473 3.532 - 3.556: 98.4964% ( 2) 00:16:03.473 3.556 - 3.579: 98.5109% ( 2) 00:16:03.473 3.579 - 3.603: 98.5255% ( 2) 00:16:03.473 3.603 - 3.627: 98.5473% ( 3) 00:16:03.473 3.627 - 3.650: 98.5618% ( 2) 00:16:03.473 3.650 - 3.674: 98.5690% ( 1) 00:16:03.473 3.674 - 3.698: 98.5836% ( 2) 00:16:03.473 3.745 - 3.769: 98.5981% ( 2) 00:16:03.473 3.769 - 3.793: 98.6054% ( 1) 00:16:03.473 3.816 - 3.840: 98.6126% ( 1) 00:16:03.473 3.840 - 3.864: 98.6199% ( 1) 00:16:03.473 3.887 - 3.911: 98.6272% ( 1) 00:16:03.473 4.053 - 4.077: 98.6344% ( 1) 00:16:03.473 4.670 - 4.693: 98.6417% ( 1) 00:16:03.473 4.812 - 4.836: 98.6489% ( 1) 00:16:03.473 4.907 - 4.930: 98.6562% ( 1) 00:16:03.473 5.286 - 5.310: 98.6635% ( 1) 00:16:03.473 5.428 - 5.452: 98.6707% ( 1) 00:16:03.473 5.523 - 5.547: 98.6780% ( 1) 00:16:03.473 5.665 - 5.689: 98.6853% ( 1) 00:16:03.473 5.879 - 5.902: 98.6925% ( 1) 00:16:03.473 5.902 - 5.926: 98.6998% ( 1) 00:16:03.473 6.044 - 6.068: 98.7071% ( 1) 00:16:03.473 6.068 - 6.116: 98.7143% ( 1) 00:16:03.473 6.210 - 6.258: 98.7288% ( 2) 00:16:03.473 6.305 - 6.353: 98.7506% ( 3) 00:16:03.473 6.353 - 6.400: 98.7579% ( 1) 00:16:03.473 6.779 - 6.827: 98.7652% ( 1) 00:16:03.473 6.921 - 6.969: 98.7724% ( 1) 00:16:03.473 7.206 - 7.253: 98.7797% ( 1) 00:16:03.473 7.253 - 7.301: 98.7870% ( 1) 00:16:03.473 7.443 - 7.490: 98.7942% ( 1) 00:16:03.473 7.538 - 7.585: 98.8015% ( 1) 00:16:03.473 9.624 - 9.671: 98.8087% ( 1) 00:16:03.473 9.813 - 9.861: 98.8160% ( 1) 00:16:03.473 10.430 - 10.477: 98.8233% ( 1) 00:16:03.473 15.550 - 15.644: 98.8305% ( 1) 00:16:03.473 15.644 - 15.739: 98.8451% ( 2) 00:16:03.473 15.739 - 15.834: 98.8596% ( 2) 00:16:03.473 15.834 - 15.929: 98.8886% ( 4) 00:16:03.473 15.929 - 16.024: 98.9250% ( 
5) 00:16:03.473 16.024 - 16.119: 98.9395% ( 2) 00:16:03.473 16.119 - 16.213: 98.9685% ( 4) 00:16:03.473 16.213 - 16.308: 99.0049% ( 5) 00:16:03.473 16.308 - 16.403: 99.0339% ( 4) 00:16:03.473 16.403 - 16.498: 99.0775% ( 6) 00:16:03.473 16.498 - 16.593: 99.1066% ( 4) 00:16:03.473 16.593 - 16.687: 99.1356% ( 4) 00:16:03.473 16.687 - 16.782: 99.1937% ( 8) 00:16:03.473 16.782 - 16.877: 99.2300% ( 5) 00:16:03.473 16.877 - 16.972: 99.2518% ( 3) 00:16:03.473 16.972 - 17.067: 99.2736% ( 3) 00:16:03.473 17.067 - 17.161: 99.2882% ( 2) 00:16:03.473 17.161 - 17.256: 99.3027% ( 2) 00:16:03.473 17.256 - 17.351: 99.3099% ( 1) 00:16:03.473 17.351 - 17.446: 99.3172% ( 1) 00:16:03.473 17.541 - 17.636: 99.3317% ( 2) 00:16:03.473 17.636 - 17.730: 99.3390% ( 1) 00:16:03.473 17.825 - 17.920: 99.3535% ( 2) 00:16:03.473 17.920 - 18.015: 99.3608% ( 1) 00:16:03.473 18.015 - 18.110: 99.3681% ( 1) 00:16:03.473 18.110 - 18.204: 99.3826% ( 2) 00:16:03.473 18.204 - 18.299: 99.3898% ( 1) 00:16:03.473 18.299 - 18.394: 99.3971% ( 1) 00:16:03.473 1389.606 - 1395.674: 99.4044% ( 1) 00:16:03.473 3980.705 - 4004.978: 99.8039% ( 55) 00:16:03.473 4004.978 - 4029.250: 99.9855% ( 25) 00:16:03.473 4053.523 - 4077.796: 99.9927% ( 1) 00:16:03.473 5971.058 - 5995.330: 100.0000% ( 1) 00:16:03.473 00:16:03.473 10:46:20 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:03.473 10:46:20 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:03.473 10:46:20 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:03.473 10:46:20 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:03.473 10:46:20 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:03.473 [ 00:16:03.473 { 00:16:03.473 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:03.473 "subtype": "Discovery", 00:16:03.473 "listen_addresses": [], 00:16:03.473 "allow_any_host": true, 00:16:03.473 "hosts": [] 00:16:03.473 }, 00:16:03.473 { 00:16:03.473 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:03.473 "subtype": "NVMe", 00:16:03.473 "listen_addresses": [ 00:16:03.473 { 00:16:03.473 "transport": "VFIOUSER", 00:16:03.473 "trtype": "VFIOUSER", 00:16:03.473 "adrfam": "IPv4", 00:16:03.473 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:03.473 "trsvcid": "0" 00:16:03.473 } 00:16:03.473 ], 00:16:03.473 "allow_any_host": true, 00:16:03.473 "hosts": [], 00:16:03.473 "serial_number": "SPDK1", 00:16:03.473 "model_number": "SPDK bdev Controller", 00:16:03.473 "max_namespaces": 32, 00:16:03.473 "min_cntlid": 1, 00:16:03.473 "max_cntlid": 65519, 00:16:03.473 "namespaces": [ 00:16:03.473 { 00:16:03.473 "nsid": 1, 00:16:03.473 "bdev_name": "Malloc1", 00:16:03.473 "name": "Malloc1", 00:16:03.473 "nguid": "490954030A6346D79E9E531A34472611", 00:16:03.473 "uuid": "49095403-0a63-46d7-9e9e-531a34472611" 00:16:03.473 }, 00:16:03.473 { 00:16:03.473 "nsid": 2, 00:16:03.473 "bdev_name": "Malloc3", 00:16:03.473 "name": "Malloc3", 00:16:03.473 "nguid": "439C359100F945F48DD502F80F8530B6", 00:16:03.473 "uuid": "439c3591-00f9-45f4-8dd5-02f80f8530b6" 00:16:03.473 } 00:16:03.473 ] 00:16:03.473 }, 00:16:03.473 { 00:16:03.473 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:03.473 "subtype": "NVMe", 00:16:03.474 "listen_addresses": [ 00:16:03.474 { 00:16:03.474 "transport": "VFIOUSER", 00:16:03.474 "trtype": "VFIOUSER", 00:16:03.474 "adrfam": "IPv4", 00:16:03.474 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:16:03.474 "trsvcid": "0" 00:16:03.474 } 00:16:03.474 ], 00:16:03.474 "allow_any_host": true, 00:16:03.474 "hosts": [], 00:16:03.474 "serial_number": "SPDK2", 00:16:03.474 "model_number": "SPDK bdev Controller", 00:16:03.474 "max_namespaces": 32, 00:16:03.474 "min_cntlid": 1, 00:16:03.474 "max_cntlid": 65519, 00:16:03.474 "namespaces": [ 00:16:03.474 { 00:16:03.474 "nsid": 1, 00:16:03.474 "bdev_name": "Malloc2", 00:16:03.474 "name": "Malloc2", 00:16:03.474 "nguid": "89DA9F3C2AFD4918B0D6137F4643A58C", 00:16:03.474 "uuid": "89da9f3c-2afd-4918-b0d6-137f4643a58c" 00:16:03.474 } 00:16:03.474 ] 00:16:03.474 } 00:16:03.474 ] 00:16:03.731 10:46:20 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:03.731 10:46:20 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3434962 00:16:03.731 10:46:20 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:03.731 10:46:20 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:03.731 10:46:20 -- common/autotest_common.sh@1244 -- # local i=0 00:16:03.731 10:46:20 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.731 10:46:20 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.731 10:46:20 -- common/autotest_common.sh@1255 -- # return 0 00:16:03.731 10:46:20 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:03.731 10:46:20 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:03.731 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.988 Malloc4 00:16:03.988 10:46:20 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:03.988 10:46:20 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.245 Asynchronous Event Request test 00:16:04.245 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.245 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.245 Registering asynchronous event callbacks... 00:16:04.245 Starting namespace attribute notice tests for all controllers... 00:16:04.245 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.245 aer_cb - Changed Namespace 00:16:04.245 Cleaning up... 
00:16:04.245 [ 00:16:04.245 { 00:16:04.245 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.245 "subtype": "Discovery", 00:16:04.245 "listen_addresses": [], 00:16:04.245 "allow_any_host": true, 00:16:04.245 "hosts": [] 00:16:04.245 }, 00:16:04.245 { 00:16:04.245 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.245 "subtype": "NVMe", 00:16:04.245 "listen_addresses": [ 00:16:04.245 { 00:16:04.245 "transport": "VFIOUSER", 00:16:04.245 "trtype": "VFIOUSER", 00:16:04.245 "adrfam": "IPv4", 00:16:04.245 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.245 "trsvcid": "0" 00:16:04.245 } 00:16:04.245 ], 00:16:04.245 "allow_any_host": true, 00:16:04.245 "hosts": [], 00:16:04.245 "serial_number": "SPDK1", 00:16:04.245 "model_number": "SPDK bdev Controller", 00:16:04.245 "max_namespaces": 32, 00:16:04.245 "min_cntlid": 1, 00:16:04.245 "max_cntlid": 65519, 00:16:04.245 "namespaces": [ 00:16:04.245 { 00:16:04.245 "nsid": 1, 00:16:04.245 "bdev_name": "Malloc1", 00:16:04.245 "name": "Malloc1", 00:16:04.245 "nguid": "490954030A6346D79E9E531A34472611", 00:16:04.245 "uuid": "49095403-0a63-46d7-9e9e-531a34472611" 00:16:04.245 }, 00:16:04.245 { 00:16:04.245 "nsid": 2, 00:16:04.245 "bdev_name": "Malloc3", 00:16:04.245 "name": "Malloc3", 00:16:04.245 "nguid": "439C359100F945F48DD502F80F8530B6", 00:16:04.245 "uuid": "439c3591-00f9-45f4-8dd5-02f80f8530b6" 00:16:04.245 } 00:16:04.245 ] 00:16:04.245 }, 00:16:04.245 { 00:16:04.245 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.245 "subtype": "NVMe", 00:16:04.245 "listen_addresses": [ 00:16:04.245 { 00:16:04.245 "transport": "VFIOUSER", 00:16:04.245 "trtype": "VFIOUSER", 00:16:04.245 "adrfam": "IPv4", 00:16:04.245 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.245 "trsvcid": "0" 00:16:04.245 } 00:16:04.245 ], 00:16:04.245 "allow_any_host": true, 00:16:04.245 "hosts": [], 00:16:04.245 "serial_number": "SPDK2", 00:16:04.245 "model_number": "SPDK bdev Controller", 00:16:04.245 "max_namespaces": 32, 00:16:04.245 "min_cntlid": 1, 00:16:04.245 "max_cntlid": 65519, 00:16:04.245 "namespaces": [ 00:16:04.245 { 00:16:04.245 "nsid": 1, 00:16:04.245 "bdev_name": "Malloc2", 00:16:04.245 "name": "Malloc2", 00:16:04.245 "nguid": "89DA9F3C2AFD4918B0D6137F4643A58C", 00:16:04.245 "uuid": "89da9f3c-2afd-4918-b0d6-137f4643a58c" 00:16:04.245 }, 00:16:04.245 { 00:16:04.245 "nsid": 2, 00:16:04.245 "bdev_name": "Malloc4", 00:16:04.245 "name": "Malloc4", 00:16:04.245 "nguid": "A705ECB577BB4A5F8B5B4D21833FF533", 00:16:04.245 "uuid": "a705ecb5-77bb-4a5f-8b5b-4d21833ff533" 00:16:04.245 } 00:16:04.245 ] 00:16:04.245 } 00:16:04.245 ] 00:16:04.245 10:46:21 -- target/nvmf_vfio_user.sh@44 -- # wait 3434962 00:16:04.245 10:46:21 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:04.245 10:46:21 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3429136 00:16:04.245 10:46:21 -- common/autotest_common.sh@926 -- # '[' -z 3429136 ']' 00:16:04.245 10:46:21 -- common/autotest_common.sh@930 -- # kill -0 3429136 00:16:04.245 10:46:21 -- common/autotest_common.sh@931 -- # uname 00:16:04.245 10:46:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.245 10:46:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3429136 00:16:04.503 10:46:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.503 10:46:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.503 10:46:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3429136' 00:16:04.503 killing process with pid 3429136 00:16:04.503 
10:46:21 -- common/autotest_common.sh@945 -- # kill 3429136 00:16:04.503 [2024-07-10 10:46:21.084392] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:04.503 10:46:21 -- common/autotest_common.sh@950 -- # wait 3429136 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3435110 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3435110' 00:16:04.760 Process pid: 3435110 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:04.760 10:46:21 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3435110 00:16:04.760 10:46:21 -- common/autotest_common.sh@819 -- # '[' -z 3435110 ']' 00:16:04.760 10:46:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.760 10:46:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:04.760 10:46:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.760 10:46:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:04.761 10:46:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.761 [2024-07-10 10:46:21.468353] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:04.761 [2024-07-10 10:46:21.469465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:04.761 [2024-07-10 10:46:21.469529] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.761 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.761 [2024-07-10 10:46:21.527654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.018 [2024-07-10 10:46:21.612044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:05.018 [2024-07-10 10:46:21.612184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.018 [2024-07-10 10:46:21.612201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.018 [2024-07-10 10:46:21.612213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:05.018 [2024-07-10 10:46:21.612271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.018 [2024-07-10 10:46:21.612331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.018 [2024-07-10 10:46:21.612396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.018 [2024-07-10 10:46:21.612399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.018 [2024-07-10 10:46:21.713641] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:05.018 [2024-07-10 10:46:21.713942] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:05.018 [2024-07-10 10:46:21.714177] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:05.018 [2024-07-10 10:46:21.714956] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:05.018 [2024-07-10 10:46:21.715060] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:05.948 10:46:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:05.948 10:46:22 -- common/autotest_common.sh@852 -- # return 0 00:16:05.949 10:46:22 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:06.878 10:46:23 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:06.878 10:46:23 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:06.878 10:46:23 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:06.878 10:46:23 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:06.879 10:46:23 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:06.879 10:46:23 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:07.146 Malloc1 00:16:07.146 10:46:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:07.403 10:46:24 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:07.660 10:46:24 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:07.916 10:46:24 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.916 10:46:24 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:07.916 10:46:24 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:08.478 Malloc2 00:16:08.478 10:46:25 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:08.478 10:46:25 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:08.765 10:46:25 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:09.023 10:46:25 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:09.023 10:46:25 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3435110 00:16:09.023 10:46:25 -- common/autotest_common.sh@926 -- # '[' -z 3435110 ']' 00:16:09.023 10:46:25 -- common/autotest_common.sh@930 -- # kill -0 3435110 00:16:09.023 10:46:25 -- common/autotest_common.sh@931 -- # uname 00:16:09.023 10:46:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.023 10:46:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3435110 00:16:09.023 10:46:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.023 10:46:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.023 10:46:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3435110' 00:16:09.023 killing process with pid 3435110 00:16:09.023 10:46:25 -- common/autotest_common.sh@945 -- # kill 3435110 00:16:09.023 10:46:25 -- common/autotest_common.sh@950 -- # wait 3435110 00:16:09.281 10:46:26 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:09.281 10:46:26 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:09.281 00:16:09.281 real 0m53.693s 00:16:09.281 user 3m32.289s 00:16:09.281 sys 0m4.683s 00:16:09.281 10:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.281 10:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:09.281 ************************************ 00:16:09.281 END TEST nvmf_vfio_user 00:16:09.281 ************************************ 00:16:09.539 10:46:26 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:09.539 10:46:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:09.539 10:46:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.539 10:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:09.539 ************************************ 00:16:09.540 START TEST nvmf_vfio_user_nvme_compliance 00:16:09.540 ************************************ 00:16:09.540 10:46:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:09.540 * Looking for test storage... 
00:16:09.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:09.540 10:46:26 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.540 10:46:26 -- nvmf/common.sh@7 -- # uname -s 00:16:09.540 10:46:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.540 10:46:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.540 10:46:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.540 10:46:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.540 10:46:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.540 10:46:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.540 10:46:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.540 10:46:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.540 10:46:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.540 10:46:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.540 10:46:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.540 10:46:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.540 10:46:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.540 10:46:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.540 10:46:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.540 10:46:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.540 10:46:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.540 10:46:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.540 10:46:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.540 10:46:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.540 10:46:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.540 10:46:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.540 10:46:26 -- paths/export.sh@5 -- # export PATH 00:16:09.540 10:46:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.540 10:46:26 -- nvmf/common.sh@46 -- # : 0 00:16:09.540 10:46:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:09.540 10:46:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:09.540 10:46:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:09.540 10:46:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.540 10:46:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.540 10:46:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:09.540 10:46:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:09.540 10:46:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:09.540 10:46:26 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.540 10:46:26 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.540 10:46:26 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:09.540 10:46:26 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:09.540 10:46:26 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:09.540 10:46:26 -- compliance/compliance.sh@20 -- # nvmfpid=3435789 00:16:09.540 10:46:26 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:09.540 10:46:26 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3435789' 00:16:09.540 Process pid: 3435789 00:16:09.540 10:46:26 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:09.540 10:46:26 -- compliance/compliance.sh@24 -- # waitforlisten 3435789 00:16:09.540 10:46:26 -- common/autotest_common.sh@819 -- # '[' -z 3435789 ']' 00:16:09.540 10:46:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.540 10:46:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:09.540 10:46:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.540 10:46:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:09.540 10:46:26 -- common/autotest_common.sh@10 -- # set +x 00:16:09.540 [2024-07-10 10:46:26.217334] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:09.540 [2024-07-10 10:46:26.217461] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.540 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.540 [2024-07-10 10:46:26.279304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.797 [2024-07-10 10:46:26.369499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.797 [2024-07-10 10:46:26.369657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.797 [2024-07-10 10:46:26.369685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.797 [2024-07-10 10:46:26.369702] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.797 [2024-07-10 10:46:26.369784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.797 [2024-07-10 10:46:26.369839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.797 [2024-07-10 10:46:26.369856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.361 10:46:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:10.361 10:46:27 -- common/autotest_common.sh@852 -- # return 0 00:16:10.361 10:46:27 -- compliance/compliance.sh@26 -- # sleep 1 00:16:11.730 10:46:28 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:11.730 10:46:28 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:11.730 10:46:28 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:11.730 10:46:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.730 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 10:46:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.730 10:46:28 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:11.730 10:46:28 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:11.730 10:46:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.730 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 malloc0 00:16:11.730 10:46:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.730 10:46:28 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:11.730 10:46:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.730 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 10:46:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.730 10:46:28 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:11.730 10:46:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.730 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 10:46:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.730 10:46:28 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:11.730 10:46:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.730 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:16:11.730 10:46:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.730 10:46:28 -- compliance/compliance.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:11.730 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.730 00:16:11.730 00:16:11.730 CUnit - A unit testing framework for C - Version 2.1-3 00:16:11.730 http://cunit.sourceforge.net/ 00:16:11.730 00:16:11.730 00:16:11.730 Suite: nvme_compliance 00:16:11.730 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-10 10:46:28.423380] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:11.730 [2024-07-10 10:46:28.423443] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:11.730 [2024-07-10 10:46:28.423458] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:11.730 passed 00:16:11.730 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:11.987 Test: admin_identify_ns ...[2024-07-10 10:46:28.664440] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:11.987 [2024-07-10 10:46:28.672444] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:11.987 passed 00:16:12.244 Test: admin_get_features_mandatory_features ...passed 00:16:12.244 Test: admin_get_features_optional_features ...passed 00:16:12.502 Test: admin_set_features_number_of_queues ...passed 00:16:12.502 Test: admin_get_log_page_mandatory_logs ...passed 00:16:12.502 Test: admin_get_log_page_with_lpo ...[2024-07-10 10:46:29.308446] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:12.759 passed 00:16:12.759 Test: fabric_property_get ...passed 00:16:12.759 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-10 10:46:29.495165] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:12.759 passed 00:16:13.016 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-10 10:46:29.667437] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.016 [2024-07-10 10:46:29.683440] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.016 passed 00:16:13.016 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-10 10:46:29.774608] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:13.016 passed 00:16:13.273 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-10 10:46:29.937434] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:13.273 [2024-07-10 10:46:29.961433] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.273 passed 00:16:13.273 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-10 10:46:30.054684] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:13.273 [2024-07-10 10:46:30.054751] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:13.273 passed 00:16:13.530 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-10 10:46:30.226456] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:13.531 [2024-07-10 10:46:30.234434] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:13.531 [2024-07-10 10:46:30.242463] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid 
cqid:0 00:16:13.531 [2024-07-10 10:46:30.250434] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:13.531 passed 00:16:13.788 Test: admin_create_io_sq_verify_pc ...[2024-07-10 10:46:30.379448] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:13.788 passed 00:16:15.158 Test: admin_create_io_qp_max_qps ...[2024-07-10 10:46:31.586457] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:15.415 passed 00:16:15.415 Test: admin_create_io_sq_shared_cq ...[2024-07-10 10:46:32.185453] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:15.672 passed 00:16:15.672 00:16:15.672 Run Summary: Type Total Ran Passed Failed Inactive 00:16:15.672 suites 1 1 n/a 0 0 00:16:15.672 tests 18 18 18 0 0 00:16:15.672 asserts 360 360 360 0 n/a 00:16:15.672 00:16:15.672 Elapsed time = 1.578 seconds 00:16:15.672 10:46:32 -- compliance/compliance.sh@42 -- # killprocess 3435789 00:16:15.672 10:46:32 -- common/autotest_common.sh@926 -- # '[' -z 3435789 ']' 00:16:15.672 10:46:32 -- common/autotest_common.sh@930 -- # kill -0 3435789 00:16:15.672 10:46:32 -- common/autotest_common.sh@931 -- # uname 00:16:15.672 10:46:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.672 10:46:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3435789 00:16:15.672 10:46:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.672 10:46:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.672 10:46:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3435789' 00:16:15.672 killing process with pid 3435789 00:16:15.672 10:46:32 -- common/autotest_common.sh@945 -- # kill 3435789 00:16:15.672 10:46:32 -- common/autotest_common.sh@950 -- # wait 3435789 00:16:15.930 10:46:32 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:15.930 10:46:32 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:15.930 00:16:15.930 real 0m6.438s 00:16:15.930 user 0m18.512s 00:16:15.930 sys 0m0.563s 00:16:15.930 10:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.930 10:46:32 -- common/autotest_common.sh@10 -- # set +x 00:16:15.930 ************************************ 00:16:15.930 END TEST nvmf_vfio_user_nvme_compliance 00:16:15.930 ************************************ 00:16:15.930 10:46:32 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:15.930 10:46:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:15.930 10:46:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.930 10:46:32 -- common/autotest_common.sh@10 -- # set +x 00:16:15.930 ************************************ 00:16:15.930 START TEST nvmf_vfio_user_fuzz 00:16:15.930 ************************************ 00:16:15.930 10:46:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:15.930 * Looking for test storage... 
00:16:15.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.930 10:46:32 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.930 10:46:32 -- nvmf/common.sh@7 -- # uname -s 00:16:15.930 10:46:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.930 10:46:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.930 10:46:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.931 10:46:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.931 10:46:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.931 10:46:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.931 10:46:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.931 10:46:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.931 10:46:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.931 10:46:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.931 10:46:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.931 10:46:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.931 10:46:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.931 10:46:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.931 10:46:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.931 10:46:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.931 10:46:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.931 10:46:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.931 10:46:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.931 10:46:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.931 10:46:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.931 10:46:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.931 10:46:32 -- paths/export.sh@5 -- # export PATH 00:16:15.931 10:46:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.931 10:46:32 -- nvmf/common.sh@46 -- # : 0 00:16:15.931 10:46:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.931 10:46:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.931 10:46:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.931 10:46:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.931 10:46:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.931 10:46:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:15.931 10:46:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.931 10:46:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3436547 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3436547' 00:16:15.931 Process pid: 3436547 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:15.931 10:46:32 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3436547 00:16:15.931 10:46:32 -- common/autotest_common.sh@819 -- # '[' -z 3436547 ']' 00:16:15.931 10:46:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.931 10:46:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:15.931 10:46:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
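The vfio_user_fuzz pass below provisions a single VFIOUSER controller and then lets nvme_fuzz hammer it for 30 seconds. Condensed from the trace that follows (paths here are relative to the workspace spdk checkout; the script drives these RPCs through the rpc_cmd test helper, while the nvmf_vfio_user run above shows the same scripts/rpc.py subcommands invoked directly), the setup amounts to roughly:

  mkdir -p /var/run/vfio-user
  # VFIOUSER transport plus a 64 MB, 512-byte-block malloc bdev to serve as the namespace
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  # expose it as nqn.2021-09.io.spdk:cnode0 with a listener rooted at /var/run/vfio-user
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # 30-second fuzz run against that controller; seed and remaining flags exactly as vfio_user_fuzz.sh passes them
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a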
00:16:15.931 10:46:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:15.931 10:46:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.862 10:46:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:16.862 10:46:33 -- common/autotest_common.sh@852 -- # return 0 00:16:16.862 10:46:33 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:18.235 10:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.235 10:46:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 10:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:18.235 10:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.235 10:46:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 malloc0 00:16:18.235 10:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:18.235 10:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.235 10:46:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 10:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:18.235 10:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.235 10:46:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 10:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:18.235 10:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.235 10:46:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 10:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:18.235 10:46:34 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:50.294 Fuzzing completed. 
Shutting down the fuzz application 00:16:50.294 00:16:50.294 Dumping successful admin opcodes: 00:16:50.294 8, 9, 10, 24, 00:16:50.294 Dumping successful io opcodes: 00:16:50.294 0, 00:16:50.294 NS: 0x200003a1ef00 I/O qp, Total commands completed: 576672, total successful commands: 2218, random_seed: 444910144 00:16:50.294 NS: 0x200003a1ef00 admin qp, Total commands completed: 143258, total successful commands: 1164, random_seed: 2644091776 00:16:50.294 10:47:05 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:50.294 10:47:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.294 10:47:05 -- common/autotest_common.sh@10 -- # set +x 00:16:50.294 10:47:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.294 10:47:05 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3436547 00:16:50.294 10:47:05 -- common/autotest_common.sh@926 -- # '[' -z 3436547 ']' 00:16:50.294 10:47:05 -- common/autotest_common.sh@930 -- # kill -0 3436547 00:16:50.294 10:47:05 -- common/autotest_common.sh@931 -- # uname 00:16:50.294 10:47:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.294 10:47:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3436547 00:16:50.294 10:47:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:50.294 10:47:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:50.294 10:47:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3436547' 00:16:50.294 killing process with pid 3436547 00:16:50.294 10:47:05 -- common/autotest_common.sh@945 -- # kill 3436547 00:16:50.294 10:47:05 -- common/autotest_common.sh@950 -- # wait 3436547 00:16:50.294 10:47:05 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:50.294 10:47:05 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:50.294 00:16:50.294 real 0m33.262s 00:16:50.294 user 0m34.549s 00:16:50.294 sys 0m26.304s 00:16:50.294 10:47:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.294 10:47:05 -- common/autotest_common.sh@10 -- # set +x 00:16:50.294 ************************************ 00:16:50.294 END TEST nvmf_vfio_user_fuzz 00:16:50.294 ************************************ 00:16:50.294 10:47:05 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.294 10:47:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:50.294 10:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:50.294 10:47:05 -- common/autotest_common.sh@10 -- # set +x 00:16:50.294 ************************************ 00:16:50.294 START TEST nvmf_host_management 00:16:50.294 ************************************ 00:16:50.294 10:47:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.294 * Looking for test storage... 
00:16:50.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.294 10:47:05 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.294 10:47:05 -- nvmf/common.sh@7 -- # uname -s 00:16:50.294 10:47:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.294 10:47:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.294 10:47:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.294 10:47:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.294 10:47:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.294 10:47:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.294 10:47:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.294 10:47:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.294 10:47:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.294 10:47:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.295 10:47:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.295 10:47:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.295 10:47:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.295 10:47:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.295 10:47:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.295 10:47:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.295 10:47:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.295 10:47:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.295 10:47:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.295 10:47:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.295 10:47:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.295 10:47:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.295 10:47:05 -- paths/export.sh@5 -- # export PATH 00:16:50.295 10:47:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.295 10:47:05 -- nvmf/common.sh@46 -- # : 0 00:16:50.295 10:47:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.295 10:47:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.295 10:47:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.295 10:47:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.295 10:47:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.295 10:47:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.295 10:47:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.295 10:47:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.295 10:47:05 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.295 10:47:05 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.295 10:47:05 -- target/host_management.sh@104 -- # nvmftestinit 00:16:50.295 10:47:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:50.295 10:47:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.295 10:47:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.295 10:47:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.295 10:47:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.295 10:47:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.295 10:47:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.295 10:47:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.295 10:47:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:50.295 10:47:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:50.295 10:47:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:50.295 10:47:05 -- common/autotest_common.sh@10 -- # set +x 00:16:51.232 10:47:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:51.232 10:47:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:51.232 10:47:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:51.232 10:47:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:51.232 10:47:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:51.232 10:47:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:51.232 10:47:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:51.232 10:47:07 -- nvmf/common.sh@294 -- # net_devs=() 00:16:51.232 10:47:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:51.232 
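Before the host-management target comes up, nvmf_tcp_init in the trace below pins the two ice-driver ports found at 0000:0a:00.0/0000:0a:00.1 (cvl_0_0 and cvl_0_1) back-to-back through a network namespace. Collapsed from that trace, and using the interface names, addresses, and port of this particular run, the wiring is approximately:

  # move the target-side port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1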
10:47:07 -- nvmf/common.sh@295 -- # e810=() 00:16:51.232 10:47:07 -- nvmf/common.sh@295 -- # local -ga e810 00:16:51.232 10:47:07 -- nvmf/common.sh@296 -- # x722=() 00:16:51.232 10:47:07 -- nvmf/common.sh@296 -- # local -ga x722 00:16:51.232 10:47:07 -- nvmf/common.sh@297 -- # mlx=() 00:16:51.232 10:47:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:51.232 10:47:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.232 10:47:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:51.232 10:47:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:51.232 10:47:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:51.232 10:47:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.232 10:47:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.232 10:47:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.232 10:47:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.232 10:47:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:51.232 10:47:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.232 10:47:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.232 10:47:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.232 10:47:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.232 10:47:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:51.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.232 10:47:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.232 10:47:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.232 10:47:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.232 10:47:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.232 10:47:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.232 10:47:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.232 10:47:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.232 10:47:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:51.232 10:47:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:51.232 10:47:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:51.232 10:47:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.232 10:47:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.232 10:47:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.232 10:47:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:51.232 10:47:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.232 10:47:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.232 10:47:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:51.232 10:47:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.232 10:47:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.232 10:47:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:51.232 10:47:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:51.232 10:47:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.232 10:47:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.232 10:47:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.232 10:47:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.232 10:47:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:51.232 10:47:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.232 10:47:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.232 10:47:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.232 10:47:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:51.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:16:51.232 00:16:51.232 --- 10.0.0.2 ping statistics --- 00:16:51.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.232 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:16:51.232 10:47:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:16:51.232 00:16:51.232 --- 10.0.0.1 ping statistics --- 00:16:51.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.232 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:16:51.232 10:47:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.232 10:47:07 -- nvmf/common.sh@410 -- # return 0 00:16:51.232 10:47:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:51.232 10:47:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.232 10:47:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:51.232 10:47:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.232 10:47:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:51.232 10:47:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:51.232 10:47:07 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:51.232 10:47:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:51.232 10:47:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.232 10:47:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.232 ************************************ 00:16:51.232 START TEST nvmf_host_management 00:16:51.232 ************************************ 00:16:51.232 10:47:07 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:51.232 10:47:07 -- target/host_management.sh@69 -- # starttarget 00:16:51.232 10:47:07 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:51.232 10:47:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.232 10:47:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:51.232 10:47:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.232 10:47:07 -- nvmf/common.sh@469 -- # nvmfpid=3442240 00:16:51.232 10:47:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.232 10:47:07 -- nvmf/common.sh@470 -- # waitforlisten 3442240 00:16:51.232 10:47:07 -- common/autotest_common.sh@819 -- # '[' -z 3442240 ']' 00:16:51.232 10:47:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.232 10:47:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.232 10:47:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.233 10:47:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.233 10:47:07 -- common/autotest_common.sh@10 -- # set +x 00:16:51.233 [2024-07-10 10:47:08.020828] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:51.233 [2024-07-10 10:47:08.020901] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.530 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.530 [2024-07-10 10:47:08.087514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.530 [2024-07-10 10:47:08.178043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.530 [2024-07-10 10:47:08.178180] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.530 [2024-07-10 10:47:08.178196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.530 [2024-07-10 10:47:08.178208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.530 [2024-07-10 10:47:08.178340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.530 [2024-07-10 10:47:08.178407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.530 [2024-07-10 10:47:08.178458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:51.530 [2024-07-10 10:47:08.178462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.488 10:47:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.488 10:47:08 -- common/autotest_common.sh@852 -- # return 0 00:16:52.488 10:47:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.488 10:47:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:52.488 10:47:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.488 10:47:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.488 10:47:08 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.488 10:47:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.488 10:47:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.488 [2024-07-10 10:47:08.992997] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.488 10:47:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.488 10:47:09 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:52.488 10:47:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:52.488 10:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.488 10:47:09 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:52.488 10:47:09 -- target/host_management.sh@23 -- # cat 00:16:52.488 10:47:09 -- target/host_management.sh@30 -- # rpc_cmd 00:16:52.488 10:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.488 10:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.488 Malloc0 00:16:52.488 [2024-07-10 10:47:09.053957] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.488 10:47:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.488 10:47:09 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:52.488 10:47:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:52.488 10:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.488 10:47:09 -- target/host_management.sh@73 -- # perfpid=3442425 00:16:52.488 10:47:09 -- target/host_management.sh@74 -- # 
waitforlisten 3442425 /var/tmp/bdevperf.sock 00:16:52.488 10:47:09 -- common/autotest_common.sh@819 -- # '[' -z 3442425 ']' 00:16:52.488 10:47:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.488 10:47:09 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:52.488 10:47:09 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:52.488 10:47:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.488 10:47:09 -- nvmf/common.sh@520 -- # config=() 00:16:52.488 10:47:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.488 10:47:09 -- nvmf/common.sh@520 -- # local subsystem config 00:16:52.488 10:47:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.488 10:47:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:52.488 10:47:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.488 10:47:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:52.488 { 00:16:52.488 "params": { 00:16:52.488 "name": "Nvme$subsystem", 00:16:52.488 "trtype": "$TEST_TRANSPORT", 00:16:52.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.488 "adrfam": "ipv4", 00:16:52.488 "trsvcid": "$NVMF_PORT", 00:16:52.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.488 "hdgst": ${hdgst:-false}, 00:16:52.488 "ddgst": ${ddgst:-false} 00:16:52.488 }, 00:16:52.488 "method": "bdev_nvme_attach_controller" 00:16:52.488 } 00:16:52.488 EOF 00:16:52.488 )") 00:16:52.488 10:47:09 -- nvmf/common.sh@542 -- # cat 00:16:52.488 10:47:09 -- nvmf/common.sh@544 -- # jq . 00:16:52.488 10:47:09 -- nvmf/common.sh@545 -- # IFS=, 00:16:52.488 10:47:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:52.488 "params": { 00:16:52.488 "name": "Nvme0", 00:16:52.488 "trtype": "tcp", 00:16:52.488 "traddr": "10.0.0.2", 00:16:52.488 "adrfam": "ipv4", 00:16:52.488 "trsvcid": "4420", 00:16:52.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:52.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:52.488 "hdgst": false, 00:16:52.488 "ddgst": false 00:16:52.488 }, 00:16:52.488 "method": "bdev_nvme_attach_controller" 00:16:52.488 }' 00:16:52.488 [2024-07-10 10:47:09.129875] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:52.488 [2024-07-10 10:47:09.129961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442425 ] 00:16:52.488 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.488 [2024-07-10 10:47:09.190594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.488 [2024-07-10 10:47:09.275537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.746 Running I/O for 10 seconds... 
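The perf pass that has just started above does not use a kernel initiator at all: the script starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace, creates the TCP transport (-t tcp -o -u 8192), a Malloc0 namespace and a listener on 10.0.0.2:4420, then points bdevperf at it with a JSON config generated on the fly. Reassembled from the trace (the --json /dev/fd/63 in the log is bash process substitution; gen_nvmf_target_json is the helper named in the trace, and its outer wrapper object is not shown there), the invocation is roughly:

  # per the trace, the rendered config attaches one controller:
  #   { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
  #                 "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false },
  #     "method": "bdev_nvme_attach_controller" }
  build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10    # 64-deep, 64 KiB verify workload for 10 seconds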
00:16:53.313 10:47:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.313 10:47:10 -- common/autotest_common.sh@852 -- # return 0 00:16:53.313 10:47:10 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:53.313 10:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.313 10:47:10 -- common/autotest_common.sh@10 -- # set +x 00:16:53.313 10:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.313 10:47:10 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.313 10:47:10 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:53.313 10:47:10 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:53.313 10:47:10 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:53.313 10:47:10 -- target/host_management.sh@52 -- # local ret=1 00:16:53.313 10:47:10 -- target/host_management.sh@53 -- # local i 00:16:53.313 10:47:10 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:53.313 10:47:10 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:53.313 10:47:10 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:53.313 10:47:10 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:53.313 10:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.313 10:47:10 -- common/autotest_common.sh@10 -- # set +x 00:16:53.313 10:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.313 10:47:10 -- target/host_management.sh@55 -- # read_io_count=1680 00:16:53.313 10:47:10 -- target/host_management.sh@58 -- # '[' 1680 -ge 100 ']' 00:16:53.313 10:47:10 -- target/host_management.sh@59 -- # ret=0 00:16:53.313 10:47:10 -- target/host_management.sh@60 -- # break 00:16:53.313 10:47:10 -- target/host_management.sh@64 -- # return 0 00:16:53.313 10:47:10 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:53.313 10:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.313 10:47:10 -- common/autotest_common.sh@10 -- # set +x 00:16:53.313 [2024-07-10 10:47:10.121586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the 
state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.313 [2024-07-10 10:47:10.121850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.121996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1370 is same with the state(5) to be set 00:16:53.314 [2024-07-10 10:47:10.122640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.122969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.122984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:53.314 [2024-07-10 10:47:10.123078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 
[2024-07-10 10:47:10.123394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 
10:47:10.123742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.123980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.123995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 
10:47:10.124062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.314 [2024-07-10 10:47:10.124266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.314 [2024-07-10 10:47:10.124281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 
10:47:10.124375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.315 [2024-07-10 10:47:10.124791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.315 [2024-07-10 10:47:10.124873] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1930080 was disconnected and freed. reset controller. 00:16:53.315 10:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.315 [2024-07-10 10:47:10.125999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:53.315 10:47:10 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:53.315 10:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.315 10:47:10 -- common/autotest_common.sh@10 -- # set +x 00:16:53.315 task offset: 98304 on job bdev=Nvme0n1 fails 00:16:53.315 00:16:53.315 Latency(us) 00:16:53.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.315 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:53.315 Job: Nvme0n1 ended in about 0.56 seconds with error 00:16:53.315 Verification LBA range: start 0x0 length 0x400 00:16:53.315 Nvme0n1 : 0.56 3185.68 199.10 115.12 0.00 19101.89 3046.21 26020.22 00:16:53.315 =================================================================================================================== 00:16:53.315 Total : 3185.68 199.10 115.12 0.00 19101.89 3046.21 26020.22 00:16:53.315 [2024-07-10 10:47:10.127924] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:53.315 [2024-07-10 10:47:10.127953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1935c20 (9): Bad file descriptor 00:16:53.315 10:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.315 10:47:10 -- target/host_management.sh@87 -- # sleep 1 00:16:53.573 [2024-07-10 10:47:10.140781] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
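The waitforio step exercised at the top of this passage is just a bounded polling loop over bdevperf's private RPC socket. A minimal sketch follows; the variable names, the 10-try bound and the 100-read threshold mirror the xtrace above, but the body is illustrative rather than the verbatim host_management.sh source, and rpc_cmd is assumed to be the suite's RPC wrapper:

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -z "$rpc_sock" ] && return 1
    [ -z "$bdev" ] && return 1
    for (( i = 10; i != 0; i-- )); do
        # bdev_get_iostat returns JSON; jq pulls the read counter for the target bdev
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # I/O is flowing while the host still has access
            break
        fi
        sleep 0.25   # pacing between polls is an assumption, not taken from the log
    done
    return $ret
}

Once the loop returns 0, the test revokes and later restores the host's access, which is what produces the SQ DELETION aborts and the controller reset logged above:

rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ... bdevperf observes ABORTED - SQ DELETION completions and resets the controller ...
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0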
00:16:54.550 10:47:11 -- target/host_management.sh@91 -- # kill -9 3442425 00:16:54.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3442425) - No such process 00:16:54.550 10:47:11 -- target/host_management.sh@91 -- # true 00:16:54.550 10:47:11 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:54.550 10:47:11 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:54.550 10:47:11 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:54.550 10:47:11 -- nvmf/common.sh@520 -- # config=() 00:16:54.550 10:47:11 -- nvmf/common.sh@520 -- # local subsystem config 00:16:54.550 10:47:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:54.550 10:47:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:54.550 { 00:16:54.550 "params": { 00:16:54.550 "name": "Nvme$subsystem", 00:16:54.550 "trtype": "$TEST_TRANSPORT", 00:16:54.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.550 "adrfam": "ipv4", 00:16:54.550 "trsvcid": "$NVMF_PORT", 00:16:54.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.550 "hdgst": ${hdgst:-false}, 00:16:54.550 "ddgst": ${ddgst:-false} 00:16:54.550 }, 00:16:54.550 "method": "bdev_nvme_attach_controller" 00:16:54.550 } 00:16:54.550 EOF 00:16:54.550 )") 00:16:54.550 10:47:11 -- nvmf/common.sh@542 -- # cat 00:16:54.550 10:47:11 -- nvmf/common.sh@544 -- # jq . 00:16:54.550 10:47:11 -- nvmf/common.sh@545 -- # IFS=, 00:16:54.550 10:47:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:54.550 "params": { 00:16:54.550 "name": "Nvme0", 00:16:54.550 "trtype": "tcp", 00:16:54.550 "traddr": "10.0.0.2", 00:16:54.551 "adrfam": "ipv4", 00:16:54.551 "trsvcid": "4420", 00:16:54.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:54.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:54.551 "hdgst": false, 00:16:54.551 "ddgst": false 00:16:54.551 }, 00:16:54.551 "method": "bdev_nvme_attach_controller" 00:16:54.551 }' 00:16:54.551 [2024-07-10 10:47:11.175206] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:54.551 [2024-07-10 10:47:11.175300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442703 ] 00:16:54.551 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.551 [2024-07-10 10:47:11.236642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.551 [2024-07-10 10:47:11.323053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.808 Running I/O for 1 seconds... 
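The --json /dev/fd/62 argument in the bdevperf invocation above is fed by gen_nvmf_target_json, whose heredoc assembly is xtraced in-line. Below is a condensed sketch of that assembly using the address and NQNs printed in this run; the real helper in nvmf/common.sh carries extra plumbing, so treat this as illustrative only:

config=()
for subsystem in 0; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
(
    # Join the entries with commas and wrap them in a bdev-subsystem envelope so the
    # result is a complete --json config; jq . doubles as a validity check. The exact
    # envelope shape is an assumption about the helper, not something xtraced above.
    IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
)

The output is handed to bdevperf the same way as above, e.g. bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1.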
00:16:55.741 00:16:55.741 Latency(us) 00:16:55.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.741 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:55.741 Verification LBA range: start 0x0 length 0x400 00:16:55.741 Nvme0n1 : 1.01 3183.68 198.98 0.00 0.00 19834.34 1407.81 26408.58 00:16:55.741 =================================================================================================================== 00:16:55.741 Total : 3183.68 198.98 0.00 0.00 19834.34 1407.81 26408.58 00:16:55.999 10:47:12 -- target/host_management.sh@101 -- # stoptarget 00:16:55.999 10:47:12 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:55.999 10:47:12 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:55.999 10:47:12 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:56.000 10:47:12 -- target/host_management.sh@40 -- # nvmftestfini 00:16:56.000 10:47:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.000 10:47:12 -- nvmf/common.sh@116 -- # sync 00:16:56.000 10:47:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.000 10:47:12 -- nvmf/common.sh@119 -- # set +e 00:16:56.000 10:47:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.000 10:47:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:56.000 rmmod nvme_tcp 00:16:56.000 rmmod nvme_fabrics 00:16:56.000 rmmod nvme_keyring 00:16:56.000 10:47:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.000 10:47:12 -- nvmf/common.sh@123 -- # set -e 00:16:56.000 10:47:12 -- nvmf/common.sh@124 -- # return 0 00:16:56.000 10:47:12 -- nvmf/common.sh@477 -- # '[' -n 3442240 ']' 00:16:56.000 10:47:12 -- nvmf/common.sh@478 -- # killprocess 3442240 00:16:56.000 10:47:12 -- common/autotest_common.sh@926 -- # '[' -z 3442240 ']' 00:16:56.000 10:47:12 -- common/autotest_common.sh@930 -- # kill -0 3442240 00:16:56.000 10:47:12 -- common/autotest_common.sh@931 -- # uname 00:16:56.000 10:47:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.000 10:47:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3442240 00:16:56.000 10:47:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:56.000 10:47:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:56.000 10:47:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3442240' 00:16:56.000 killing process with pid 3442240 00:16:56.000 10:47:12 -- common/autotest_common.sh@945 -- # kill 3442240 00:16:56.000 10:47:12 -- common/autotest_common.sh@950 -- # wait 3442240 00:16:56.258 [2024-07-10 10:47:13.029872] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:56.258 10:47:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:56.258 10:47:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:56.258 10:47:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:56.258 10:47:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.258 10:47:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:56.258 10:47:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.258 10:47:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.258 10:47:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.792 10:47:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
00:16:58.792 00:16:58.792 real 0m7.124s 00:16:58.792 user 0m21.970s 00:16:58.792 sys 0m1.320s 00:16:58.792 10:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.792 10:47:15 -- common/autotest_common.sh@10 -- # set +x 00:16:58.792 ************************************ 00:16:58.792 END TEST nvmf_host_management 00:16:58.792 ************************************ 00:16:58.792 10:47:15 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:58.792 00:16:58.792 real 0m9.261s 00:16:58.792 user 0m22.740s 00:16:58.792 sys 0m2.712s 00:16:58.792 10:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:58.792 10:47:15 -- common/autotest_common.sh@10 -- # set +x 00:16:58.792 ************************************ 00:16:58.792 END TEST nvmf_host_management 00:16:58.792 ************************************ 00:16:58.792 10:47:15 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:58.792 10:47:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:58.792 10:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:58.792 10:47:15 -- common/autotest_common.sh@10 -- # set +x 00:16:58.792 ************************************ 00:16:58.792 START TEST nvmf_lvol 00:16:58.792 ************************************ 00:16:58.792 10:47:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:58.792 * Looking for test storage... 00:16:58.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.792 10:47:15 -- nvmf/common.sh@7 -- # uname -s 00:16:58.792 10:47:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.792 10:47:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.792 10:47:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.792 10:47:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.792 10:47:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.792 10:47:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.792 10:47:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.792 10:47:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.792 10:47:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.792 10:47:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.792 10:47:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.792 10:47:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.792 10:47:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.792 10:47:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.792 10:47:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.792 10:47:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.792 10:47:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.792 10:47:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.792 10:47:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.792 10:47:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.792 10:47:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.792 10:47:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.792 10:47:15 -- paths/export.sh@5 -- # export PATH 00:16:58.792 10:47:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.792 10:47:15 -- nvmf/common.sh@46 -- # : 0 00:16:58.792 10:47:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:58.792 10:47:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:58.792 10:47:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:58.792 10:47:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.792 10:47:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.792 10:47:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:58.792 10:47:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:58.792 10:47:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.792 10:47:15 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:58.792 10:47:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:58.792 10:47:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:16:58.792 10:47:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:58.792 10:47:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:58.792 10:47:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:58.792 10:47:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.792 10:47:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.792 10:47:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.792 10:47:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:58.792 10:47:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:58.792 10:47:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:58.792 10:47:15 -- common/autotest_common.sh@10 -- # set +x 00:17:00.691 10:47:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:00.691 10:47:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:00.691 10:47:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:00.691 10:47:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:00.691 10:47:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:00.691 10:47:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:00.691 10:47:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:00.691 10:47:17 -- nvmf/common.sh@294 -- # net_devs=() 00:17:00.691 10:47:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:00.691 10:47:17 -- nvmf/common.sh@295 -- # e810=() 00:17:00.691 10:47:17 -- nvmf/common.sh@295 -- # local -ga e810 00:17:00.691 10:47:17 -- nvmf/common.sh@296 -- # x722=() 00:17:00.691 10:47:17 -- nvmf/common.sh@296 -- # local -ga x722 00:17:00.691 10:47:17 -- nvmf/common.sh@297 -- # mlx=() 00:17:00.691 10:47:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:00.691 10:47:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.691 10:47:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:00.691 10:47:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:00.691 10:47:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:00.691 10:47:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:00.691 10:47:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:00.691 10:47:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:00.691 10:47:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:00.691 10:47:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:00.691 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:00.691 10:47:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:00.691 10:47:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:00.691 10:47:17 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.691 10:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:00.692 10:47:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:00.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:00.692 10:47:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:00.692 10:47:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:00.692 10:47:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.692 10:47:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:00.692 10:47:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.692 10:47:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:00.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:00.692 10:47:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.692 10:47:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:00.692 10:47:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.692 10:47:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:00.692 10:47:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.692 10:47:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:00.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:00.692 10:47:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.692 10:47:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:00.692 10:47:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:00.692 10:47:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:00.692 10:47:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.692 10:47:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.692 10:47:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.692 10:47:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:00.692 10:47:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.692 10:47:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.692 10:47:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:00.692 10:47:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.692 10:47:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.692 10:47:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:00.692 10:47:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:00.692 10:47:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.692 10:47:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.692 10:47:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:17:00.692 10:47:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.692 10:47:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:00.692 10:47:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.692 10:47:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.692 10:47:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.692 10:47:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:00.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:17:00.692 00:17:00.692 --- 10.0.0.2 ping statistics --- 00:17:00.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.692 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:00.692 10:47:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:00.692 00:17:00.692 --- 10.0.0.1 ping statistics --- 00:17:00.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.692 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:00.692 10:47:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.692 10:47:17 -- nvmf/common.sh@410 -- # return 0 00:17:00.692 10:47:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:00.692 10:47:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.692 10:47:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:00.692 10:47:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.692 10:47:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:00.692 10:47:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:00.692 10:47:17 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:00.692 10:47:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:00.692 10:47:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:00.692 10:47:17 -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 10:47:17 -- nvmf/common.sh@469 -- # nvmfpid=3444817 00:17:00.692 10:47:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:00.692 10:47:17 -- nvmf/common.sh@470 -- # waitforlisten 3444817 00:17:00.692 10:47:17 -- common/autotest_common.sh@819 -- # '[' -z 3444817 ']' 00:17:00.692 10:47:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.692 10:47:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:00.692 10:47:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.692 10:47:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:00.692 10:47:17 -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 [2024-07-10 10:47:17.354069] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
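Condensed, the namespace plumbing that nvmf_tcp_init performed above amounts to the following; the cvl_0_* interface names and 10.0.0.x addresses are the ones detected on this rig and will differ elsewhere:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic on 4420
ping -c 1 10.0.0.2                                                   # root netns -> target netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target netns -> root netns

The target application is then started inside that namespace, which is why nvmfappstart above runs nvmf_tgt under ip netns exec cvl_0_0_ns_spdk.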
00:17:00.692 [2024-07-10 10:47:17.354140] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.692 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.692 [2024-07-10 10:47:17.423902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:00.950 [2024-07-10 10:47:17.519619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:00.950 [2024-07-10 10:47:17.519798] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.950 [2024-07-10 10:47:17.519818] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.950 [2024-07-10 10:47:17.519833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.950 [2024-07-10 10:47:17.522448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.950 [2024-07-10 10:47:17.522518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.950 [2024-07-10 10:47:17.522522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.515 10:47:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:01.515 10:47:18 -- common/autotest_common.sh@852 -- # return 0 00:17:01.515 10:47:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:01.515 10:47:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:01.515 10:47:18 -- common/autotest_common.sh@10 -- # set +x 00:17:01.772 10:47:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.772 10:47:18 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:02.029 [2024-07-10 10:47:18.614303] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.029 10:47:18 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.287 10:47:18 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:02.287 10:47:18 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.545 10:47:19 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:02.545 10:47:19 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:02.802 10:47:19 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:03.059 10:47:19 -- target/nvmf_lvol.sh@29 -- # lvs=da814011-c65c-42e1-b338-5a2f776aa4d3 00:17:03.059 10:47:19 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da814011-c65c-42e1-b338-5a2f776aa4d3 lvol 20 00:17:03.316 10:47:19 -- target/nvmf_lvol.sh@32 -- # lvol=ad68b98e-3860-4870-acb1-90a93fa112e2 00:17:03.316 10:47:19 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:03.573 10:47:20 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
ad68b98e-3860-4870-acb1-90a93fa112e2 00:17:03.830 10:47:20 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:03.830 [2024-07-10 10:47:20.629091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.830 10:47:20 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.087 10:47:20 -- target/nvmf_lvol.sh@42 -- # perf_pid=3445358 00:17:04.087 10:47:20 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:04.087 10:47:20 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:04.344 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.275 10:47:21 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ad68b98e-3860-4870-acb1-90a93fa112e2 MY_SNAPSHOT 00:17:05.532 10:47:22 -- target/nvmf_lvol.sh@47 -- # snapshot=c09f0118-b02b-44fc-8eef-9278de220a3a 00:17:05.532 10:47:22 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ad68b98e-3860-4870-acb1-90a93fa112e2 30 00:17:05.789 10:47:22 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c09f0118-b02b-44fc-8eef-9278de220a3a MY_CLONE 00:17:06.046 10:47:22 -- target/nvmf_lvol.sh@49 -- # clone=3089fad1-a779-48d8-ab32-a88ca491f338 00:17:06.046 10:47:22 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3089fad1-a779-48d8-ab32-a88ca491f338 00:17:06.610 10:47:23 -- target/nvmf_lvol.sh@53 -- # wait 3445358 00:17:14.715 Initializing NVMe Controllers 00:17:14.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:14.715 Controller IO queue size 128, less than required. 00:17:14.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:14.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:14.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:14.715 Initialization complete. Launching workers. 
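While the spdk_nvme_perf workload above runs for its 10 seconds, the test reshapes the volume underneath it. Collapsed into one place, the RPC sequence driven above looks roughly like this (rpc.py path shortened; UUIDs are captured from each create call, so the literal da814011/ad68b98e/c09f0118/3089fad1 values differ on every run):

rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore on top of the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # size 20 (the script's LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# spdk_nvme_perf attaches to that listener and writes while the volume is reshaped:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                    # grow to 30 (LVOL_BDEV_FINAL_SIZE)
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # materialize the clone, detaching it from the snapshot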
00:17:14.715 ======================================================== 00:17:14.715 Latency(us) 00:17:14.715 Device Information : IOPS MiB/s Average min max 00:17:14.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11039.80 43.12 11599.25 2076.16 58658.88 00:17:14.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10887.20 42.53 11756.48 2086.63 60263.12 00:17:14.715 ======================================================== 00:17:14.715 Total : 21927.00 85.65 11677.32 2076.16 60263.12 00:17:14.715 00:17:14.715 10:47:31 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:14.972 10:47:31 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad68b98e-3860-4870-acb1-90a93fa112e2 00:17:15.229 10:47:31 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da814011-c65c-42e1-b338-5a2f776aa4d3 00:17:15.486 10:47:32 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:15.486 10:47:32 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:15.486 10:47:32 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:15.486 10:47:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:15.486 10:47:32 -- nvmf/common.sh@116 -- # sync 00:17:15.486 10:47:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:15.486 10:47:32 -- nvmf/common.sh@119 -- # set +e 00:17:15.486 10:47:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:15.486 10:47:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:15.486 rmmod nvme_tcp 00:17:15.486 rmmod nvme_fabrics 00:17:15.486 rmmod nvme_keyring 00:17:15.486 10:47:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:15.486 10:47:32 -- nvmf/common.sh@123 -- # set -e 00:17:15.486 10:47:32 -- nvmf/common.sh@124 -- # return 0 00:17:15.486 10:47:32 -- nvmf/common.sh@477 -- # '[' -n 3444817 ']' 00:17:15.486 10:47:32 -- nvmf/common.sh@478 -- # killprocess 3444817 00:17:15.486 10:47:32 -- common/autotest_common.sh@926 -- # '[' -z 3444817 ']' 00:17:15.486 10:47:32 -- common/autotest_common.sh@930 -- # kill -0 3444817 00:17:15.486 10:47:32 -- common/autotest_common.sh@931 -- # uname 00:17:15.486 10:47:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.486 10:47:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3444817 00:17:15.486 10:47:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:15.486 10:47:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:15.486 10:47:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3444817' 00:17:15.486 killing process with pid 3444817 00:17:15.486 10:47:32 -- common/autotest_common.sh@945 -- # kill 3444817 00:17:15.486 10:47:32 -- common/autotest_common.sh@950 -- # wait 3444817 00:17:16.053 10:47:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:16.053 10:47:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:16.053 10:47:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:16.053 10:47:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.053 10:47:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:16.053 10:47:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.053 10:47:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.053 10:47:32 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:17.955 10:47:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:17.955 00:17:17.955 real 0m19.468s 00:17:17.955 user 1m6.398s 00:17:17.955 sys 0m5.569s 00:17:17.955 10:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.955 10:47:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.955 ************************************ 00:17:17.955 END TEST nvmf_lvol 00:17:17.955 ************************************ 00:17:17.955 10:47:34 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:17.955 10:47:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:17.955 10:47:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:17.955 10:47:34 -- common/autotest_common.sh@10 -- # set +x 00:17:17.955 ************************************ 00:17:17.955 START TEST nvmf_lvs_grow 00:17:17.955 ************************************ 00:17:17.955 10:47:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:17.955 * Looking for test storage... 00:17:17.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.955 10:47:34 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.955 10:47:34 -- nvmf/common.sh@7 -- # uname -s 00:17:17.955 10:47:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.955 10:47:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.955 10:47:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.955 10:47:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.955 10:47:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.955 10:47:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.955 10:47:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.955 10:47:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.955 10:47:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.955 10:47:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.955 10:47:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.955 10:47:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.955 10:47:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.955 10:47:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.955 10:47:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.955 10:47:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.955 10:47:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.955 10:47:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.955 10:47:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.955 10:47:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.955 10:47:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.955 10:47:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.955 10:47:34 -- paths/export.sh@5 -- # export PATH 00:17:17.955 10:47:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.955 10:47:34 -- nvmf/common.sh@46 -- # : 0 00:17:17.955 10:47:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:17.955 10:47:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:17.955 10:47:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:17.955 10:47:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.955 10:47:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.955 10:47:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:17.955 10:47:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:17.955 10:47:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:17.955 10:47:34 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.955 10:47:34 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.955 10:47:34 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:17.955 10:47:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:17.955 10:47:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.955 10:47:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:17.955 10:47:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:17.955 10:47:34 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:17:17.955 10:47:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.955 10:47:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.955 10:47:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.955 10:47:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:17.955 10:47:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:17.955 10:47:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:17.955 10:47:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.854 10:47:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:19.854 10:47:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:19.854 10:47:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:19.854 10:47:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:19.854 10:47:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:19.854 10:47:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:19.854 10:47:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:19.854 10:47:36 -- nvmf/common.sh@294 -- # net_devs=() 00:17:19.854 10:47:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:19.854 10:47:36 -- nvmf/common.sh@295 -- # e810=() 00:17:19.854 10:47:36 -- nvmf/common.sh@295 -- # local -ga e810 00:17:19.854 10:47:36 -- nvmf/common.sh@296 -- # x722=() 00:17:19.854 10:47:36 -- nvmf/common.sh@296 -- # local -ga x722 00:17:19.854 10:47:36 -- nvmf/common.sh@297 -- # mlx=() 00:17:19.854 10:47:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:19.854 10:47:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.854 10:47:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:19.854 10:47:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:19.854 10:47:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:19.854 10:47:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:19.854 10:47:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:19.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:19.854 10:47:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:19.854 
10:47:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:19.854 10:47:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:19.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:19.854 10:47:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:19.854 10:47:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:19.854 10:47:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.854 10:47:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:19.854 10:47:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.854 10:47:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:19.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:19.854 10:47:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.854 10:47:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:19.854 10:47:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.854 10:47:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:19.854 10:47:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.854 10:47:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:19.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:19.854 10:47:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.854 10:47:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:19.854 10:47:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:19.854 10:47:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:19.854 10:47:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:19.854 10:47:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.854 10:47:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.854 10:47:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.854 10:47:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:19.854 10:47:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.854 10:47:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.854 10:47:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:19.854 10:47:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.854 10:47:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.854 10:47:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:19.854 10:47:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:20.112 10:47:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.112 10:47:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.112 10:47:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.112 10:47:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.112 10:47:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:20.112 
10:47:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.112 10:47:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.112 10:47:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.112 10:47:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:20.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:20.112 00:17:20.112 --- 10.0.0.2 ping statistics --- 00:17:20.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.112 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:20.112 10:47:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:20.112 00:17:20.112 --- 10.0.0.1 ping statistics --- 00:17:20.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.112 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:20.112 10:47:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.112 10:47:36 -- nvmf/common.sh@410 -- # return 0 00:17:20.112 10:47:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.112 10:47:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.112 10:47:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.112 10:47:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.112 10:47:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.112 10:47:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.112 10:47:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:20.112 10:47:36 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:20.112 10:47:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.112 10:47:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:20.112 10:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:20.112 10:47:36 -- nvmf/common.sh@469 -- # nvmfpid=3448691 00:17:20.112 10:47:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:20.112 10:47:36 -- nvmf/common.sh@470 -- # waitforlisten 3448691 00:17:20.112 10:47:36 -- common/autotest_common.sh@819 -- # '[' -z 3448691 ']' 00:17:20.112 10:47:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.112 10:47:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.112 10:47:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.112 10:47:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.112 10:47:36 -- common/autotest_common.sh@10 -- # set +x 00:17:20.112 [2024-07-10 10:47:36.864492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
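As context for the trace above: before the lvs_grow target is launched, nvmf_tcp_init wires the two E810/ice ports discovered earlier (cvl_0_0 and cvl_0_1) so initiator and target can talk over real TCP on one host: cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk as the target side, cvl_0_1 stays in the root namespace as the initiator side. Condensed into a minimal sketch, using only the interface names, addresses, port and binary path that appear in this log (illustrative, not a general recipe):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator side reaches the target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace reaches the initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1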
00:17:20.112 [2024-07-10 10:47:36.864561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.112 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.112 [2024-07-10 10:47:36.926522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.369 [2024-07-10 10:47:37.009256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.369 [2024-07-10 10:47:37.009409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.369 [2024-07-10 10:47:37.009453] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.369 [2024-07-10 10:47:37.009466] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.369 [2024-07-10 10:47:37.009510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.301 10:47:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.301 10:47:37 -- common/autotest_common.sh@852 -- # return 0 00:17:21.301 10:47:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.301 10:47:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:21.301 10:47:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.301 10:47:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.301 10:47:37 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.301 [2024-07-10 10:47:38.114198] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:21.558 10:47:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:21.558 10:47:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:21.558 10:47:38 -- common/autotest_common.sh@10 -- # set +x 00:17:21.558 ************************************ 00:17:21.558 START TEST lvs_grow_clean 00:17:21.558 ************************************ 00:17:21.558 10:47:38 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:21.558 10:47:38 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:21.815 10:47:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:21.815 10:47:38 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:21.815 10:47:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:21.815 10:47:38 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:21.815 10:47:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:22.073 10:47:38 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:22.073 10:47:38 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:22.073 10:47:38 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6bbabe77-3642-4dcc-9a70-367b9491962b lvol 150 00:17:22.330 10:47:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0778254e-dab9-4598-b4bf-0e44d29843e8 00:17:22.330 10:47:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.330 10:47:39 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:22.587 [2024-07-10 10:47:39.332518] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:22.587 [2024-07-10 10:47:39.332625] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:22.587 true 00:17:22.587 10:47:39 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:22.587 10:47:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:22.844 10:47:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:22.844 10:47:39 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:23.102 10:47:39 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0778254e-dab9-4598-b4bf-0e44d29843e8 00:17:23.360 10:47:40 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:23.617 [2024-07-10 10:47:40.327623] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.617 10:47:40 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.874 10:47:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3449145 00:17:23.874 10:47:40 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:23.874 10:47:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.874 10:47:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3449145 /var/tmp/bdevperf.sock 00:17:23.874 10:47:40 -- common/autotest_common.sh@819 -- # '[' -z 3449145 ']' 00:17:23.874 
10:47:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.874 10:47:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:23.874 10:47:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.874 10:47:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:23.874 10:47:40 -- common/autotest_common.sh@10 -- # set +x 00:17:23.874 [2024-07-10 10:47:40.616988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:23.874 [2024-07-10 10:47:40.617056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449145 ] 00:17:23.874 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.874 [2024-07-10 10:47:40.677608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.133 [2024-07-10 10:47:40.767683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.086 10:47:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.086 10:47:41 -- common/autotest_common.sh@852 -- # return 0 00:17:25.086 10:47:41 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:25.344 Nvme0n1 00:17:25.344 10:47:42 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:25.601 [ 00:17:25.601 { 00:17:25.601 "name": "Nvme0n1", 00:17:25.601 "aliases": [ 00:17:25.601 "0778254e-dab9-4598-b4bf-0e44d29843e8" 00:17:25.601 ], 00:17:25.601 "product_name": "NVMe disk", 00:17:25.602 "block_size": 4096, 00:17:25.602 "num_blocks": 38912, 00:17:25.602 "uuid": "0778254e-dab9-4598-b4bf-0e44d29843e8", 00:17:25.602 "assigned_rate_limits": { 00:17:25.602 "rw_ios_per_sec": 0, 00:17:25.602 "rw_mbytes_per_sec": 0, 00:17:25.602 "r_mbytes_per_sec": 0, 00:17:25.602 "w_mbytes_per_sec": 0 00:17:25.602 }, 00:17:25.602 "claimed": false, 00:17:25.602 "zoned": false, 00:17:25.602 "supported_io_types": { 00:17:25.602 "read": true, 00:17:25.602 "write": true, 00:17:25.602 "unmap": true, 00:17:25.602 "write_zeroes": true, 00:17:25.602 "flush": true, 00:17:25.602 "reset": true, 00:17:25.602 "compare": true, 00:17:25.602 "compare_and_write": true, 00:17:25.602 "abort": true, 00:17:25.602 "nvme_admin": true, 00:17:25.602 "nvme_io": true 00:17:25.602 }, 00:17:25.602 "driver_specific": { 00:17:25.602 "nvme": [ 00:17:25.602 { 00:17:25.602 "trid": { 00:17:25.602 "trtype": "TCP", 00:17:25.602 "adrfam": "IPv4", 00:17:25.602 "traddr": "10.0.0.2", 00:17:25.602 "trsvcid": "4420", 00:17:25.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:25.602 }, 00:17:25.602 "ctrlr_data": { 00:17:25.602 "cntlid": 1, 00:17:25.602 "vendor_id": "0x8086", 00:17:25.602 "model_number": "SPDK bdev Controller", 00:17:25.602 "serial_number": "SPDK0", 00:17:25.602 "firmware_revision": "24.01.1", 00:17:25.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.602 "oacs": { 00:17:25.602 "security": 0, 00:17:25.602 "format": 0, 00:17:25.602 "firmware": 0, 00:17:25.602 "ns_manage": 0 00:17:25.602 }, 00:17:25.602 "multi_ctrlr": 
true, 00:17:25.602 "ana_reporting": false 00:17:25.602 }, 00:17:25.602 "vs": { 00:17:25.602 "nvme_version": "1.3" 00:17:25.602 }, 00:17:25.602 "ns_data": { 00:17:25.602 "id": 1, 00:17:25.602 "can_share": true 00:17:25.602 } 00:17:25.602 } 00:17:25.602 ], 00:17:25.602 "mp_policy": "active_passive" 00:17:25.602 } 00:17:25.602 } 00:17:25.602 ] 00:17:25.602 10:47:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3449300 00:17:25.602 10:47:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:25.602 10:47:42 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:25.602 Running I/O for 10 seconds... 00:17:26.974 Latency(us) 00:17:26.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.974 Nvme0n1 : 1.00 12448.00 48.62 0.00 0.00 0.00 0.00 0.00 00:17:26.974 =================================================================================================================== 00:17:26.974 Total : 12448.00 48.62 0.00 0.00 0.00 0.00 0.00 00:17:26.974 00:17:27.538 10:47:44 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:27.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.796 Nvme0n1 : 2.00 12605.50 49.24 0.00 0.00 0.00 0.00 0.00 00:17:27.796 =================================================================================================================== 00:17:27.796 Total : 12605.50 49.24 0.00 0.00 0.00 0.00 0.00 00:17:27.796 00:17:27.796 true 00:17:27.796 10:47:44 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:27.796 10:47:44 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:28.054 10:47:44 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:28.054 10:47:44 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:28.054 10:47:44 -- target/nvmf_lvs_grow.sh@65 -- # wait 3449300 00:17:28.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.620 Nvme0n1 : 3.00 12658.33 49.45 0.00 0.00 0.00 0.00 0.00 00:17:28.620 =================================================================================================================== 00:17:28.620 Total : 12658.33 49.45 0.00 0.00 0.00 0.00 0.00 00:17:28.620 00:17:29.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.996 Nvme0n1 : 4.00 12732.50 49.74 0.00 0.00 0.00 0.00 0.00 00:17:29.996 =================================================================================================================== 00:17:29.996 Total : 12732.50 49.74 0.00 0.00 0.00 0.00 0.00 00:17:29.996 00:17:30.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.928 Nvme0n1 : 5.00 12776.80 49.91 0.00 0.00 0.00 0.00 0.00 00:17:30.928 =================================================================================================================== 00:17:30.928 Total : 12776.80 49.91 0.00 0.00 0.00 0.00 0.00 00:17:30.928 00:17:31.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.917 Nvme0n1 : 6.00 12785.17 49.94 0.00 0.00 0.00 0.00 0.00 00:17:31.917 
=================================================================================================================== 00:17:31.917 Total : 12785.17 49.94 0.00 0.00 0.00 0.00 0.00 00:17:31.917 00:17:32.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.849 Nvme0n1 : 7.00 12809.29 50.04 0.00 0.00 0.00 0.00 0.00 00:17:32.849 =================================================================================================================== 00:17:32.849 Total : 12809.29 50.04 0.00 0.00 0.00 0.00 0.00 00:17:32.849 00:17:33.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.781 Nvme0n1 : 8.00 12867.25 50.26 0.00 0.00 0.00 0.00 0.00 00:17:33.781 =================================================================================================================== 00:17:33.781 Total : 12867.25 50.26 0.00 0.00 0.00 0.00 0.00 00:17:33.781 00:17:34.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.736 Nvme0n1 : 9.00 12919.22 50.47 0.00 0.00 0.00 0.00 0.00 00:17:34.736 =================================================================================================================== 00:17:34.736 Total : 12919.22 50.47 0.00 0.00 0.00 0.00 0.00 00:17:34.736 00:17:35.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.670 Nvme0n1 : 10.00 12935.70 50.53 0.00 0.00 0.00 0.00 0.00 00:17:35.670 =================================================================================================================== 00:17:35.670 Total : 12935.70 50.53 0.00 0.00 0.00 0.00 0.00 00:17:35.670 00:17:35.670 00:17:35.670 Latency(us) 00:17:35.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.670 Nvme0n1 : 10.01 12940.32 50.55 0.00 0.00 9886.88 6407.96 26020.22 00:17:35.670 =================================================================================================================== 00:17:35.671 Total : 12940.32 50.55 0.00 0.00 9886.88 6407.96 26020.22 00:17:35.671 0 00:17:35.671 10:47:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3449145 00:17:35.671 10:47:52 -- common/autotest_common.sh@926 -- # '[' -z 3449145 ']' 00:17:35.671 10:47:52 -- common/autotest_common.sh@930 -- # kill -0 3449145 00:17:35.671 10:47:52 -- common/autotest_common.sh@931 -- # uname 00:17:35.671 10:47:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:35.671 10:47:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3449145 00:17:35.671 10:47:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:35.671 10:47:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:35.671 10:47:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3449145' 00:17:35.671 killing process with pid 3449145 00:17:35.671 10:47:52 -- common/autotest_common.sh@945 -- # kill 3449145 00:17:35.671 Received shutdown signal, test time was about 10.000000 seconds 00:17:35.671 00:17:35.671 Latency(us) 00:17:35.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.671 =================================================================================================================== 00:17:35.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.671 10:47:52 -- common/autotest_common.sh@950 -- # wait 3449145 00:17:35.928 10:47:52 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:36.185 10:47:52 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:36.185 10:47:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:36.442 10:47:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:36.442 10:47:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:36.442 10:47:53 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:36.699 [2024-07-10 10:47:53.487388] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:36.699 10:47:53 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:36.699 10:47:53 -- common/autotest_common.sh@640 -- # local es=0 00:17:36.699 10:47:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:36.699 10:47:53 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.699 10:47:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.699 10:47:53 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.957 10:47:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.957 10:47:53 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.957 10:47:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.957 10:47:53 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.957 10:47:53 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:36.957 10:47:53 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:36.957 request: 00:17:36.957 { 00:17:36.957 "uuid": "6bbabe77-3642-4dcc-9a70-367b9491962b", 00:17:36.957 "method": "bdev_lvol_get_lvstores", 00:17:36.957 "req_id": 1 00:17:36.957 } 00:17:36.957 Got JSON-RPC error response 00:17:36.957 response: 00:17:36.957 { 00:17:36.957 "code": -19, 00:17:36.957 "message": "No such device" 00:17:36.957 } 00:17:36.957 10:47:53 -- common/autotest_common.sh@643 -- # es=1 00:17:36.957 10:47:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:36.957 10:47:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:36.957 10:47:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:36.957 10:47:53 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:37.213 aio_bdev 00:17:37.213 10:47:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0778254e-dab9-4598-b4bf-0e44d29843e8 00:17:37.213 10:47:54 -- common/autotest_common.sh@887 -- # local bdev_name=0778254e-dab9-4598-b4bf-0e44d29843e8 00:17:37.213 10:47:54 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:37.213 10:47:54 -- common/autotest_common.sh@889 -- # local i 00:17:37.213 10:47:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:37.213 10:47:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:37.213 10:47:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:37.470 10:47:54 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0778254e-dab9-4598-b4bf-0e44d29843e8 -t 2000 00:17:37.728 [ 00:17:37.728 { 00:17:37.728 "name": "0778254e-dab9-4598-b4bf-0e44d29843e8", 00:17:37.728 "aliases": [ 00:17:37.728 "lvs/lvol" 00:17:37.728 ], 00:17:37.728 "product_name": "Logical Volume", 00:17:37.728 "block_size": 4096, 00:17:37.728 "num_blocks": 38912, 00:17:37.728 "uuid": "0778254e-dab9-4598-b4bf-0e44d29843e8", 00:17:37.728 "assigned_rate_limits": { 00:17:37.728 "rw_ios_per_sec": 0, 00:17:37.728 "rw_mbytes_per_sec": 0, 00:17:37.728 "r_mbytes_per_sec": 0, 00:17:37.728 "w_mbytes_per_sec": 0 00:17:37.728 }, 00:17:37.728 "claimed": false, 00:17:37.728 "zoned": false, 00:17:37.728 "supported_io_types": { 00:17:37.728 "read": true, 00:17:37.728 "write": true, 00:17:37.728 "unmap": true, 00:17:37.728 "write_zeroes": true, 00:17:37.728 "flush": false, 00:17:37.728 "reset": true, 00:17:37.728 "compare": false, 00:17:37.728 "compare_and_write": false, 00:17:37.728 "abort": false, 00:17:37.728 "nvme_admin": false, 00:17:37.728 "nvme_io": false 00:17:37.728 }, 00:17:37.728 "driver_specific": { 00:17:37.728 "lvol": { 00:17:37.728 "lvol_store_uuid": "6bbabe77-3642-4dcc-9a70-367b9491962b", 00:17:37.728 "base_bdev": "aio_bdev", 00:17:37.728 "thin_provision": false, 00:17:37.728 "snapshot": false, 00:17:37.728 "clone": false, 00:17:37.728 "esnap_clone": false 00:17:37.728 } 00:17:37.728 } 00:17:37.728 } 00:17:37.728 ] 00:17:37.728 10:47:54 -- common/autotest_common.sh@895 -- # return 0 00:17:37.728 10:47:54 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:37.728 10:47:54 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:37.985 10:47:54 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:37.985 10:47:54 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:37.985 10:47:54 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:38.243 10:47:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:38.243 10:47:55 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0778254e-dab9-4598-b4bf-0e44d29843e8 00:17:38.501 10:47:55 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6bbabe77-3642-4dcc-9a70-367b9491962b 00:17:38.758 10:47:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.016 00:17:39.016 real 0m17.614s 00:17:39.016 user 0m17.155s 00:17:39.016 sys 0m1.950s 00:17:39.016 10:47:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:39.016 10:47:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.016 ************************************ 00:17:39.016 END TEST lvs_grow_clean 00:17:39.016 ************************************ 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:39.016 10:47:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:39.016 10:47:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:39.016 10:47:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.016 ************************************ 00:17:39.016 START TEST lvs_grow_dirty 00:17:39.016 ************************************ 00:17:39.016 10:47:55 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.016 10:47:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.274 10:47:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:39.274 10:47:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:39.838 10:47:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:39.838 10:47:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:39.838 10:47:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:39.838 10:47:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:39.838 10:47:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:39.838 10:47:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 lvol 150 00:17:40.095 10:47:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ad86f49-14c3-472c-97f2-50e362cab542 00:17:40.095 10:47:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.095 10:47:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:40.353 [2024-07-10 10:47:57.107818] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:40.353 [2024-07-10 10:47:57.107915] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:40.353 
true 00:17:40.353 10:47:57 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:40.353 10:47:57 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:40.610 10:47:57 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:40.610 10:47:57 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:40.910 10:47:57 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ad86f49-14c3-472c-97f2-50e362cab542 00:17:41.168 10:47:57 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:41.425 10:47:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:41.682 10:47:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3451382 00:17:41.682 10:47:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:41.682 10:47:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.683 10:47:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3451382 /var/tmp/bdevperf.sock 00:17:41.683 10:47:58 -- common/autotest_common.sh@819 -- # '[' -z 3451382 ']' 00:17:41.683 10:47:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.683 10:47:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.683 10:47:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.683 10:47:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.683 10:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:41.683 [2024-07-10 10:47:58.372286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
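At this point the dirty pass has rebuilt the same stack the clean pass used: a 200M file exposed as aio_bdev, an lvstore with 4 MiB clusters on top of it, a 150M lvol, and an NVMe/TCP subsystem exporting that lvol. The grow step both passes verify reduces to a handful of RPCs; condensed from the trace above into a sketch, with rpc.py shortened to its basename, $LVS standing for the lvstore UUID returned by bdev_lvol_create_lvstore, and /path/to/aio_bdev standing for the workspace file .../test/nvmf/target/aio_bdev:

  truncate -s 200M /path/to/aio_bdev
  rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u $LVS lvol 150
  rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'   # 49 clusters before the grow
  truncate -s 400M /path/to/aio_bdev       # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev          # aio bdev picks up the new block count (51200 -> 102400)
  rpc.py bdev_lvol_grow_lvstore -u $LVS    # lvstore claims the new space
  rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'   # 99 clusters afterwards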
00:17:41.683 [2024-07-10 10:47:58.372369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451382 ] 00:17:41.683 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.683 [2024-07-10 10:47:58.432226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.940 [2024-07-10 10:47:58.522048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.872 10:47:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.872 10:47:59 -- common/autotest_common.sh@852 -- # return 0 00:17:42.872 10:47:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:42.872 Nvme0n1 00:17:42.872 10:47:59 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:43.128 [ 00:17:43.128 { 00:17:43.128 "name": "Nvme0n1", 00:17:43.128 "aliases": [ 00:17:43.128 "5ad86f49-14c3-472c-97f2-50e362cab542" 00:17:43.128 ], 00:17:43.128 "product_name": "NVMe disk", 00:17:43.128 "block_size": 4096, 00:17:43.128 "num_blocks": 38912, 00:17:43.128 "uuid": "5ad86f49-14c3-472c-97f2-50e362cab542", 00:17:43.128 "assigned_rate_limits": { 00:17:43.128 "rw_ios_per_sec": 0, 00:17:43.128 "rw_mbytes_per_sec": 0, 00:17:43.128 "r_mbytes_per_sec": 0, 00:17:43.128 "w_mbytes_per_sec": 0 00:17:43.128 }, 00:17:43.128 "claimed": false, 00:17:43.128 "zoned": false, 00:17:43.128 "supported_io_types": { 00:17:43.128 "read": true, 00:17:43.128 "write": true, 00:17:43.128 "unmap": true, 00:17:43.128 "write_zeroes": true, 00:17:43.128 "flush": true, 00:17:43.128 "reset": true, 00:17:43.128 "compare": true, 00:17:43.128 "compare_and_write": true, 00:17:43.128 "abort": true, 00:17:43.128 "nvme_admin": true, 00:17:43.128 "nvme_io": true 00:17:43.128 }, 00:17:43.129 "driver_specific": { 00:17:43.129 "nvme": [ 00:17:43.129 { 00:17:43.129 "trid": { 00:17:43.129 "trtype": "TCP", 00:17:43.129 "adrfam": "IPv4", 00:17:43.129 "traddr": "10.0.0.2", 00:17:43.129 "trsvcid": "4420", 00:17:43.129 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.129 }, 00:17:43.129 "ctrlr_data": { 00:17:43.129 "cntlid": 1, 00:17:43.129 "vendor_id": "0x8086", 00:17:43.129 "model_number": "SPDK bdev Controller", 00:17:43.129 "serial_number": "SPDK0", 00:17:43.129 "firmware_revision": "24.01.1", 00:17:43.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.129 "oacs": { 00:17:43.129 "security": 0, 00:17:43.129 "format": 0, 00:17:43.129 "firmware": 0, 00:17:43.129 "ns_manage": 0 00:17:43.129 }, 00:17:43.129 "multi_ctrlr": true, 00:17:43.129 "ana_reporting": false 00:17:43.129 }, 00:17:43.129 "vs": { 00:17:43.129 "nvme_version": "1.3" 00:17:43.129 }, 00:17:43.129 "ns_data": { 00:17:43.129 "id": 1, 00:17:43.129 "can_share": true 00:17:43.129 } 00:17:43.129 } 00:17:43.129 ], 00:17:43.129 "mp_policy": "active_passive" 00:17:43.129 } 00:17:43.129 } 00:17:43.129 ] 00:17:43.386 10:47:59 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3451532 00:17:43.386 10:47:59 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:43.386 10:47:59 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.386 Running I/O 
for 10 seconds... 00:17:44.318 Latency(us) 00:17:44.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.318 Nvme0n1 : 1.00 13626.00 53.23 0.00 0.00 0.00 0.00 0.00 00:17:44.318 =================================================================================================================== 00:17:44.318 Total : 13626.00 53.23 0.00 0.00 0.00 0.00 0.00 00:17:44.318 00:17:45.248 10:48:01 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:45.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.506 Nvme0n1 : 2.00 13757.00 53.74 0.00 0.00 0.00 0.00 0.00 00:17:45.506 =================================================================================================================== 00:17:45.506 Total : 13757.00 53.74 0.00 0.00 0.00 0.00 0.00 00:17:45.506 00:17:45.506 true 00:17:45.506 10:48:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:45.506 10:48:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:45.763 10:48:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:45.763 10:48:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:45.763 10:48:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 3451532 00:17:46.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.329 Nvme0n1 : 3.00 13992.67 54.66 0.00 0.00 0.00 0.00 0.00 00:17:46.329 =================================================================================================================== 00:17:46.329 Total : 13992.67 54.66 0.00 0.00 0.00 0.00 0.00 00:17:46.329 00:17:47.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.262 Nvme0n1 : 4.00 14002.50 54.70 0.00 0.00 0.00 0.00 0.00 00:17:47.262 =================================================================================================================== 00:17:47.262 Total : 14002.50 54.70 0.00 0.00 0.00 0.00 0.00 00:17:47.262 00:17:48.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.634 Nvme0n1 : 5.00 14022.80 54.78 0.00 0.00 0.00 0.00 0.00 00:17:48.634 =================================================================================================================== 00:17:48.634 Total : 14022.80 54.78 0.00 0.00 0.00 0.00 0.00 00:17:48.634 00:17:49.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.565 Nvme0n1 : 6.00 14051.00 54.89 0.00 0.00 0.00 0.00 0.00 00:17:49.565 =================================================================================================================== 00:17:49.565 Total : 14051.00 54.89 0.00 0.00 0.00 0.00 0.00 00:17:49.565 00:17:50.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.496 Nvme0n1 : 7.00 14066.57 54.95 0.00 0.00 0.00 0.00 0.00 00:17:50.496 =================================================================================================================== 00:17:50.496 Total : 14066.57 54.95 0.00 0.00 0.00 0.00 0.00 00:17:50.496 00:17:51.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.449 Nvme0n1 : 8.00 14060.25 54.92 0.00 0.00 0.00 0.00 0.00 00:17:51.449 
=================================================================================================================== 00:17:51.449 Total : 14060.25 54.92 0.00 0.00 0.00 0.00 0.00 00:17:51.449 00:17:52.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.381 Nvme0n1 : 9.00 14058.89 54.92 0.00 0.00 0.00 0.00 0.00 00:17:52.381 =================================================================================================================== 00:17:52.381 Total : 14058.89 54.92 0.00 0.00 0.00 0.00 0.00 00:17:52.381 00:17:53.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.314 Nvme0n1 : 10.00 14062.60 54.93 0.00 0.00 0.00 0.00 0.00 00:17:53.314 =================================================================================================================== 00:17:53.314 Total : 14062.60 54.93 0.00 0.00 0.00 0.00 0.00 00:17:53.314 00:17:53.314 00:17:53.314 Latency(us) 00:17:53.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.314 Nvme0n1 : 10.01 14062.11 54.93 0.00 0.00 9094.90 6505.05 16893.72 00:17:53.314 =================================================================================================================== 00:17:53.314 Total : 14062.11 54.93 0.00 0.00 9094.90 6505.05 16893.72 00:17:53.314 0 00:17:53.314 10:48:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3451382 00:17:53.314 10:48:10 -- common/autotest_common.sh@926 -- # '[' -z 3451382 ']' 00:17:53.314 10:48:10 -- common/autotest_common.sh@930 -- # kill -0 3451382 00:17:53.314 10:48:10 -- common/autotest_common.sh@931 -- # uname 00:17:53.314 10:48:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:53.314 10:48:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3451382 00:17:53.314 10:48:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:53.314 10:48:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:53.314 10:48:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3451382' 00:17:53.314 killing process with pid 3451382 00:17:53.314 10:48:10 -- common/autotest_common.sh@945 -- # kill 3451382 00:17:53.314 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.314 00:17:53.314 Latency(us) 00:17:53.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.314 =================================================================================================================== 00:17:53.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.314 10:48:10 -- common/autotest_common.sh@950 -- # wait 3451382 00:17:53.571 10:48:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:53.828 10:48:10 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:53.828 10:48:10 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:54.087 10:48:10 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:54.087 10:48:10 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:54.087 10:48:10 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3448691 00:17:54.087 10:48:10 -- target/nvmf_lvs_grow.sh@74 -- # wait 3448691 00:17:54.087 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3448691 Killed "${NVMF_APP[@]}" "$@" 00:17:54.087 10:48:10 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:54.087 10:48:10 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:54.087 10:48:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:54.087 10:48:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.087 10:48:10 -- common/autotest_common.sh@10 -- # set +x 00:17:54.087 10:48:10 -- nvmf/common.sh@469 -- # nvmfpid=3453509 00:17:54.087 10:48:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:54.087 10:48:10 -- nvmf/common.sh@470 -- # waitforlisten 3453509 00:17:54.087 10:48:10 -- common/autotest_common.sh@819 -- # '[' -z 3453509 ']' 00:17:54.087 10:48:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.087 10:48:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.087 10:48:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.087 10:48:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.087 10:48:10 -- common/autotest_common.sh@10 -- # set +x 00:17:54.087 [2024-07-10 10:48:10.910235] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:54.087 [2024-07-10 10:48:10.910329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.345 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.345 [2024-07-10 10:48:10.975458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.345 [2024-07-10 10:48:11.062488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.345 [2024-07-10 10:48:11.062639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.345 [2024-07-10 10:48:11.062656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.345 [2024-07-10 10:48:11.062667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
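The trace above restarts the NVMe-oF target for the dirty-lvstore phase: nvmfappstart launches a fresh nvmf_tgt (nvmfpid 3453509) inside the cvl_0_0_ns_spdk namespace on core mask 0x1, and waitforlisten blocks until its RPC socket answers. A minimal sketch of that restart, assuming the workspace path shown in the trace and the default /var/tmp/spdk.sock socket (the real waitforlisten helper may poll differently):

    # Relaunch the target in the target namespace, then wait for its RPC socket to respond.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll a cheap RPC until the target is listening (assumed stand-in for waitforlisten).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done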
00:17:54.345 [2024-07-10 10:48:11.062693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.277 10:48:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.277 10:48:11 -- common/autotest_common.sh@852 -- # return 0 00:17:55.277 10:48:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:55.277 10:48:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:55.277 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:17:55.277 10:48:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.277 10:48:11 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:55.277 [2024-07-10 10:48:12.076467] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:55.277 [2024-07-10 10:48:12.076613] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:55.277 [2024-07-10 10:48:12.076672] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:55.277 10:48:12 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:55.277 10:48:12 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5ad86f49-14c3-472c-97f2-50e362cab542 00:17:55.277 10:48:12 -- common/autotest_common.sh@887 -- # local bdev_name=5ad86f49-14c3-472c-97f2-50e362cab542 00:17:55.277 10:48:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:55.535 10:48:12 -- common/autotest_common.sh@889 -- # local i 00:17:55.535 10:48:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:55.535 10:48:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:55.535 10:48:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:55.535 10:48:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ad86f49-14c3-472c-97f2-50e362cab542 -t 2000 00:17:55.793 [ 00:17:55.793 { 00:17:55.794 "name": "5ad86f49-14c3-472c-97f2-50e362cab542", 00:17:55.794 "aliases": [ 00:17:55.794 "lvs/lvol" 00:17:55.794 ], 00:17:55.794 "product_name": "Logical Volume", 00:17:55.794 "block_size": 4096, 00:17:55.794 "num_blocks": 38912, 00:17:55.794 "uuid": "5ad86f49-14c3-472c-97f2-50e362cab542", 00:17:55.794 "assigned_rate_limits": { 00:17:55.794 "rw_ios_per_sec": 0, 00:17:55.794 "rw_mbytes_per_sec": 0, 00:17:55.794 "r_mbytes_per_sec": 0, 00:17:55.794 "w_mbytes_per_sec": 0 00:17:55.794 }, 00:17:55.794 "claimed": false, 00:17:55.794 "zoned": false, 00:17:55.794 "supported_io_types": { 00:17:55.794 "read": true, 00:17:55.794 "write": true, 00:17:55.794 "unmap": true, 00:17:55.794 "write_zeroes": true, 00:17:55.794 "flush": false, 00:17:55.794 "reset": true, 00:17:55.794 "compare": false, 00:17:55.794 "compare_and_write": false, 00:17:55.794 "abort": false, 00:17:55.794 "nvme_admin": false, 00:17:55.794 "nvme_io": false 00:17:55.794 }, 00:17:55.794 "driver_specific": { 00:17:55.794 "lvol": { 00:17:55.794 "lvol_store_uuid": "7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0", 00:17:55.794 "base_bdev": "aio_bdev", 00:17:55.794 "thin_provision": false, 00:17:55.794 "snapshot": false, 00:17:55.794 "clone": false, 00:17:55.794 "esnap_clone": false 00:17:55.794 } 00:17:55.794 } 00:17:55.794 } 00:17:55.794 ] 00:17:55.794 10:48:12 -- common/autotest_common.sh@895 -- # return 0 00:17:55.794 10:48:12 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:55.794 10:48:12 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:56.052 10:48:12 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:56.052 10:48:12 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:56.052 10:48:12 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:56.309 10:48:13 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:56.309 10:48:13 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:56.566 [2024-07-10 10:48:13.269291] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:56.566 10:48:13 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:56.566 10:48:13 -- common/autotest_common.sh@640 -- # local es=0 00:17:56.566 10:48:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:56.566 10:48:13 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.566 10:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.566 10:48:13 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.566 10:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.566 10:48:13 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.566 10:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.566 10:48:13 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.566 10:48:13 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:56.566 10:48:13 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:56.824 request: 00:17:56.824 { 00:17:56.824 "uuid": "7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0", 00:17:56.824 "method": "bdev_lvol_get_lvstores", 00:17:56.824 "req_id": 1 00:17:56.824 } 00:17:56.824 Got JSON-RPC error response 00:17:56.824 response: 00:17:56.824 { 00:17:56.824 "code": -19, 00:17:56.824 "message": "No such device" 00:17:56.824 } 00:17:56.824 10:48:13 -- common/autotest_common.sh@643 -- # es=1 00:17:56.824 10:48:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:56.824 10:48:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:56.824 10:48:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:56.824 10:48:13 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:57.081 aio_bdev 00:17:57.081 10:48:13 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5ad86f49-14c3-472c-97f2-50e362cab542 00:17:57.081 10:48:13 -- 
common/autotest_common.sh@887 -- # local bdev_name=5ad86f49-14c3-472c-97f2-50e362cab542 00:17:57.081 10:48:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.081 10:48:13 -- common/autotest_common.sh@889 -- # local i 00:17:57.081 10:48:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.081 10:48:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.081 10:48:13 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:57.339 10:48:14 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ad86f49-14c3-472c-97f2-50e362cab542 -t 2000 00:17:57.612 [ 00:17:57.612 { 00:17:57.612 "name": "5ad86f49-14c3-472c-97f2-50e362cab542", 00:17:57.612 "aliases": [ 00:17:57.612 "lvs/lvol" 00:17:57.612 ], 00:17:57.612 "product_name": "Logical Volume", 00:17:57.612 "block_size": 4096, 00:17:57.612 "num_blocks": 38912, 00:17:57.612 "uuid": "5ad86f49-14c3-472c-97f2-50e362cab542", 00:17:57.612 "assigned_rate_limits": { 00:17:57.612 "rw_ios_per_sec": 0, 00:17:57.612 "rw_mbytes_per_sec": 0, 00:17:57.612 "r_mbytes_per_sec": 0, 00:17:57.612 "w_mbytes_per_sec": 0 00:17:57.612 }, 00:17:57.612 "claimed": false, 00:17:57.612 "zoned": false, 00:17:57.612 "supported_io_types": { 00:17:57.612 "read": true, 00:17:57.612 "write": true, 00:17:57.612 "unmap": true, 00:17:57.612 "write_zeroes": true, 00:17:57.612 "flush": false, 00:17:57.612 "reset": true, 00:17:57.612 "compare": false, 00:17:57.612 "compare_and_write": false, 00:17:57.612 "abort": false, 00:17:57.612 "nvme_admin": false, 00:17:57.612 "nvme_io": false 00:17:57.612 }, 00:17:57.612 "driver_specific": { 00:17:57.612 "lvol": { 00:17:57.612 "lvol_store_uuid": "7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0", 00:17:57.612 "base_bdev": "aio_bdev", 00:17:57.612 "thin_provision": false, 00:17:57.612 "snapshot": false, 00:17:57.612 "clone": false, 00:17:57.612 "esnap_clone": false 00:17:57.612 } 00:17:57.612 } 00:17:57.612 } 00:17:57.612 ] 00:17:57.612 10:48:14 -- common/autotest_common.sh@895 -- # return 0 00:17:57.612 10:48:14 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:57.612 10:48:14 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:57.870 10:48:14 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:57.870 10:48:14 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:57.870 10:48:14 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:58.128 10:48:14 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:58.128 10:48:14 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ad86f49-14c3-472c-97f2-50e362cab542 00:17:58.386 10:48:15 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7cb76e26-ca60-4ca1-9fcd-b04b9edddfe0 00:17:58.643 10:48:15 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:58.901 10:48:15 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:58.901 00:17:58.901 real 0m19.854s 00:17:58.901 user 
0m44.014s 00:17:58.901 sys 0m7.267s 00:17:58.901 10:48:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.901 10:48:15 -- common/autotest_common.sh@10 -- # set +x 00:17:58.901 ************************************ 00:17:58.901 END TEST lvs_grow_dirty 00:17:58.901 ************************************ 00:17:58.901 10:48:15 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:58.901 10:48:15 -- common/autotest_common.sh@796 -- # type=--id 00:17:58.901 10:48:15 -- common/autotest_common.sh@797 -- # id=0 00:17:58.901 10:48:15 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:58.901 10:48:15 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:58.901 10:48:15 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:58.901 10:48:15 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:58.901 10:48:15 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:58.901 10:48:15 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:58.901 nvmf_trace.0 00:17:58.901 10:48:15 -- common/autotest_common.sh@811 -- # return 0 00:17:58.901 10:48:15 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:58.901 10:48:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.901 10:48:15 -- nvmf/common.sh@116 -- # sync 00:17:58.901 10:48:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:58.901 10:48:15 -- nvmf/common.sh@119 -- # set +e 00:17:58.901 10:48:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.901 10:48:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:58.901 rmmod nvme_tcp 00:17:58.901 rmmod nvme_fabrics 00:17:58.901 rmmod nvme_keyring 00:17:59.159 10:48:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.159 10:48:15 -- nvmf/common.sh@123 -- # set -e 00:17:59.159 10:48:15 -- nvmf/common.sh@124 -- # return 0 00:17:59.159 10:48:15 -- nvmf/common.sh@477 -- # '[' -n 3453509 ']' 00:17:59.159 10:48:15 -- nvmf/common.sh@478 -- # killprocess 3453509 00:17:59.159 10:48:15 -- common/autotest_common.sh@926 -- # '[' -z 3453509 ']' 00:17:59.160 10:48:15 -- common/autotest_common.sh@930 -- # kill -0 3453509 00:17:59.160 10:48:15 -- common/autotest_common.sh@931 -- # uname 00:17:59.160 10:48:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.160 10:48:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3453509 00:17:59.160 10:48:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.160 10:48:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.160 10:48:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3453509' 00:17:59.160 killing process with pid 3453509 00:17:59.160 10:48:15 -- common/autotest_common.sh@945 -- # kill 3453509 00:17:59.160 10:48:15 -- common/autotest_common.sh@950 -- # wait 3453509 00:17:59.417 10:48:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.417 10:48:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.417 10:48:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.417 10:48:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.417 10:48:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.417 10:48:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.417 10:48:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.417 10:48:16 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:01.315 10:48:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:01.315 00:18:01.315 real 0m43.403s 00:18:01.315 user 1m7.574s 00:18:01.315 sys 0m11.054s 00:18:01.315 10:48:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.315 10:48:18 -- common/autotest_common.sh@10 -- # set +x 00:18:01.315 ************************************ 00:18:01.315 END TEST nvmf_lvs_grow 00:18:01.315 ************************************ 00:18:01.315 10:48:18 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:01.315 10:48:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:01.315 10:48:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.315 10:48:18 -- common/autotest_common.sh@10 -- # set +x 00:18:01.315 ************************************ 00:18:01.315 START TEST nvmf_bdev_io_wait 00:18:01.315 ************************************ 00:18:01.315 10:48:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:01.315 * Looking for test storage... 00:18:01.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.315 10:48:18 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.315 10:48:18 -- nvmf/common.sh@7 -- # uname -s 00:18:01.315 10:48:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.315 10:48:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.315 10:48:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.315 10:48:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.315 10:48:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.315 10:48:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.315 10:48:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.315 10:48:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.315 10:48:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.315 10:48:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.315 10:48:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.315 10:48:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.315 10:48:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.315 10:48:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.316 10:48:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.316 10:48:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.573 10:48:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.573 10:48:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.573 10:48:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.573 10:48:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.573 10:48:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.573 10:48:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.573 10:48:18 -- paths/export.sh@5 -- # export PATH 00:18:01.573 10:48:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.573 10:48:18 -- nvmf/common.sh@46 -- # : 0 00:18:01.573 10:48:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.573 10:48:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.573 10:48:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.573 10:48:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.573 10:48:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.573 10:48:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.573 10:48:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.573 10:48:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.573 10:48:18 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.573 10:48:18 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.573 10:48:18 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:01.573 10:48:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:01.573 10:48:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.573 10:48:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.573 10:48:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.573 10:48:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.573 10:48:18 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.573 10:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.573 10:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.573 10:48:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:01.573 10:48:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:01.573 10:48:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:01.573 10:48:18 -- common/autotest_common.sh@10 -- # set +x 00:18:03.471 10:48:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:03.471 10:48:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:03.471 10:48:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:03.471 10:48:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:03.471 10:48:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:03.471 10:48:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:03.471 10:48:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:03.471 10:48:20 -- nvmf/common.sh@294 -- # net_devs=() 00:18:03.471 10:48:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:03.471 10:48:20 -- nvmf/common.sh@295 -- # e810=() 00:18:03.471 10:48:20 -- nvmf/common.sh@295 -- # local -ga e810 00:18:03.471 10:48:20 -- nvmf/common.sh@296 -- # x722=() 00:18:03.471 10:48:20 -- nvmf/common.sh@296 -- # local -ga x722 00:18:03.471 10:48:20 -- nvmf/common.sh@297 -- # mlx=() 00:18:03.471 10:48:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:03.471 10:48:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.471 10:48:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:03.471 10:48:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:03.471 10:48:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:03.471 10:48:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.471 10:48:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:03.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:03.471 10:48:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:18:03.471 10:48:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:03.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:03.471 10:48:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:03.471 10:48:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:03.471 10:48:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:03.472 10:48:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.472 10:48:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.472 10:48:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.472 10:48:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.472 10:48:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:03.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:03.472 10:48:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.472 10:48:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.472 10:48:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.472 10:48:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.472 10:48:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.472 10:48:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:03.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:03.472 10:48:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.472 10:48:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:03.472 10:48:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:03.472 10:48:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:03.472 10:48:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:03.472 10:48:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:03.472 10:48:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.472 10:48:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.472 10:48:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.472 10:48:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:03.472 10:48:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.472 10:48:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.472 10:48:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:03.472 10:48:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.472 10:48:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.472 10:48:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:03.472 10:48:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:03.472 10:48:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.472 10:48:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.472 10:48:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.472 10:48:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.472 10:48:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:03.472 10:48:20 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.729 10:48:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.729 10:48:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.729 10:48:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:03.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:18:03.729 00:18:03.729 --- 10.0.0.2 ping statistics --- 00:18:03.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.729 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:03.729 10:48:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:18:03.729 00:18:03.729 --- 10.0.0.1 ping statistics --- 00:18:03.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.729 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:03.729 10:48:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.729 10:48:20 -- nvmf/common.sh@410 -- # return 0 00:18:03.729 10:48:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:03.729 10:48:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.729 10:48:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:03.730 10:48:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:03.730 10:48:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.730 10:48:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:03.730 10:48:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:03.730 10:48:20 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:03.730 10:48:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:03.730 10:48:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:03.730 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.730 10:48:20 -- nvmf/common.sh@469 -- # nvmfpid=3456064 00:18:03.730 10:48:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:03.730 10:48:20 -- nvmf/common.sh@470 -- # waitforlisten 3456064 00:18:03.730 10:48:20 -- common/autotest_common.sh@819 -- # '[' -z 3456064 ']' 00:18:03.730 10:48:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.730 10:48:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.730 10:48:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.730 10:48:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.730 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.730 [2024-07-10 10:48:20.399177] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
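The two successful pings above are the tail end of the nvmf_tcp_init plumbing traced a few lines earlier: the target-side port cvl_0_0 moves into its own network namespace, both ends get 10.0.0.x/24 addresses, and the NVMe/TCP port 4420 is opened through the firewall on the initiator side. Condensed into one sketch (cvl_0_0/cvl_0_1 are the two E810 ports found during discovery; run as root):

    # Target port goes into a private namespace; the initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port, then sanity-check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1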
00:18:03.730 [2024-07-10 10:48:20.399264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.730 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.730 [2024-07-10 10:48:20.466930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.988 [2024-07-10 10:48:20.557371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:03.988 [2024-07-10 10:48:20.557540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.988 [2024-07-10 10:48:20.557560] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.988 [2024-07-10 10:48:20.557573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.988 [2024-07-10 10:48:20.557632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.988 [2024-07-10 10:48:20.557687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.988 [2024-07-10 10:48:20.557656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.988 [2024-07-10 10:48:20.557689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.988 10:48:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:03.988 10:48:20 -- common/autotest_common.sh@852 -- # return 0 00:18:03.988 10:48:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:03.988 10:48:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 10:48:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 10:48:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 10:48:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 [2024-07-10 10:48:20.682438] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.988 10:48:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 Malloc0 00:18:03.988 10:48:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 10:48:20 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 10:48:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.988 10:48:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.988 10:48:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.988 [2024-07-10 10:48:20.740657] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.988 10:48:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3456100 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@30 -- # READ_PID=3456102 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # config=() 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.988 10:48:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.988 { 00:18:03.988 "params": { 00:18:03.988 "name": "Nvme$subsystem", 00:18:03.988 "trtype": "$TEST_TRANSPORT", 00:18:03.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.988 "adrfam": "ipv4", 00:18:03.988 "trsvcid": "$NVMF_PORT", 00:18:03.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.988 "hdgst": ${hdgst:-false}, 00:18:03.988 "ddgst": ${ddgst:-false} 00:18:03.988 }, 00:18:03.988 "method": "bdev_nvme_attach_controller" 00:18:03.988 } 00:18:03.988 EOF 00:18:03.988 )") 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3456104 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # config=() 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.988 10:48:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.988 { 00:18:03.988 "params": { 00:18:03.988 "name": "Nvme$subsystem", 00:18:03.988 "trtype": "$TEST_TRANSPORT", 00:18:03.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.988 "adrfam": "ipv4", 00:18:03.988 "trsvcid": "$NVMF_PORT", 00:18:03.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.988 "hdgst": ${hdgst:-false}, 00:18:03.988 "ddgst": ${ddgst:-false} 00:18:03.988 }, 00:18:03.988 "method": "bdev_nvme_attach_controller" 00:18:03.988 } 00:18:03.988 EOF 00:18:03.988 )") 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # cat 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w 
flush -t 1 -s 256 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3456107 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@35 -- # sync 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # config=() 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.988 10:48:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.988 { 00:18:03.988 "params": { 00:18:03.988 "name": "Nvme$subsystem", 00:18:03.988 "trtype": "$TEST_TRANSPORT", 00:18:03.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.988 "adrfam": "ipv4", 00:18:03.988 "trsvcid": "$NVMF_PORT", 00:18:03.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.988 "hdgst": ${hdgst:-false}, 00:18:03.988 "ddgst": ${ddgst:-false} 00:18:03.988 }, 00:18:03.988 "method": "bdev_nvme_attach_controller" 00:18:03.988 } 00:18:03.988 EOF 00:18:03.988 )") 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # cat 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # config=() 00:18:03.988 10:48:20 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.988 10:48:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.988 { 00:18:03.988 "params": { 00:18:03.988 "name": "Nvme$subsystem", 00:18:03.988 "trtype": "$TEST_TRANSPORT", 00:18:03.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.988 "adrfam": "ipv4", 00:18:03.988 "trsvcid": "$NVMF_PORT", 00:18:03.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.988 "hdgst": ${hdgst:-false}, 00:18:03.988 "ddgst": ${ddgst:-false} 00:18:03.988 }, 00:18:03.988 "method": "bdev_nvme_attach_controller" 00:18:03.988 } 00:18:03.988 EOF 00:18:03.988 )") 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # cat 00:18:03.988 10:48:20 -- target/bdev_io_wait.sh@37 -- # wait 3456100 00:18:03.988 10:48:20 -- nvmf/common.sh@542 -- # cat 00:18:03.988 10:48:20 -- nvmf/common.sh@544 -- # jq . 00:18:03.988 10:48:20 -- nvmf/common.sh@544 -- # jq . 00:18:03.988 10:48:20 -- nvmf/common.sh@544 -- # jq . 00:18:03.988 10:48:20 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.988 10:48:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.988 "params": { 00:18:03.988 "name": "Nvme1", 00:18:03.988 "trtype": "tcp", 00:18:03.988 "traddr": "10.0.0.2", 00:18:03.988 "adrfam": "ipv4", 00:18:03.988 "trsvcid": "4420", 00:18:03.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.989 "hdgst": false, 00:18:03.989 "ddgst": false 00:18:03.989 }, 00:18:03.989 "method": "bdev_nvme_attach_controller" 00:18:03.989 }' 00:18:03.989 10:48:20 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.989 10:48:20 -- nvmf/common.sh@544 -- # jq . 
00:18:03.989 10:48:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.989 "params": { 00:18:03.989 "name": "Nvme1", 00:18:03.989 "trtype": "tcp", 00:18:03.989 "traddr": "10.0.0.2", 00:18:03.989 "adrfam": "ipv4", 00:18:03.989 "trsvcid": "4420", 00:18:03.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.989 "hdgst": false, 00:18:03.989 "ddgst": false 00:18:03.989 }, 00:18:03.989 "method": "bdev_nvme_attach_controller" 00:18:03.989 }' 00:18:03.989 10:48:20 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.989 10:48:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.989 "params": { 00:18:03.989 "name": "Nvme1", 00:18:03.989 "trtype": "tcp", 00:18:03.989 "traddr": "10.0.0.2", 00:18:03.989 "adrfam": "ipv4", 00:18:03.989 "trsvcid": "4420", 00:18:03.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.989 "hdgst": false, 00:18:03.989 "ddgst": false 00:18:03.989 }, 00:18:03.989 "method": "bdev_nvme_attach_controller" 00:18:03.989 }' 00:18:03.989 10:48:20 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.989 10:48:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.989 "params": { 00:18:03.989 "name": "Nvme1", 00:18:03.989 "trtype": "tcp", 00:18:03.989 "traddr": "10.0.0.2", 00:18:03.989 "adrfam": "ipv4", 00:18:03.989 "trsvcid": "4420", 00:18:03.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.989 "hdgst": false, 00:18:03.989 "ddgst": false 00:18:03.989 }, 00:18:03.989 "method": "bdev_nvme_attach_controller" 00:18:03.989 }' 00:18:03.989 [2024-07-10 10:48:20.784078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:03.989 [2024-07-10 10:48:20.784078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:03.989 [2024-07-10 10:48:20.784171] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-10 10:48:20.784171] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:03.989 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:03.989 [2024-07-10 10:48:20.784273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:03.989 [2024-07-10 10:48:20.784272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:18:03.989 [2024-07-10 10:48:20.784345] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-10 10:48:20.784345] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:03.989 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:04.246 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.246 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.246 [2024-07-10 10:48:20.969181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.246 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.246 [2024-07-10 10:48:21.042773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:04.503 [2024-07-10 10:48:21.070086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.503 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.503 [2024-07-10 10:48:21.140853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.503 [2024-07-10 10:48:21.146762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:04.503 [2024-07-10 10:48:21.209141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:04.503 [2024-07-10 10:48:21.215535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.503 [2024-07-10 10:48:21.284351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:04.760 Running I/O for 1 seconds... 00:18:04.760 Running I/O for 1 seconds... 00:18:04.760 Running I/O for 1 seconds... 00:18:04.760 Running I/O for 1 seconds... 
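While the four one-second bdevperf jobs above run, it is worth restating how the target they attach to was configured a few lines earlier through rpc_cmd: with nvmf_tgt started under --wait-for-rpc, the test shrinks the bdev_io pool, finishes framework init, creates the TCP transport, and exports a Malloc0 namespace on 10.0.0.2:4420. Replayed as plain rpc.py calls (socket path assumed to be the default /var/tmp/spdk.sock):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    # Tiny bdev_io pool/cache so submissions hit the io_wait path this test exercises.
    $RPC bdev_set_options -p 5 -c 1
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420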
00:18:05.696 00:18:05.696 Latency(us) 00:18:05.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.696 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:05.696 Nvme1n1 : 1.05 5075.23 19.83 0.00 0.00 24876.47 10582.85 52428.80 00:18:05.696 =================================================================================================================== 00:18:05.696 Total : 5075.23 19.83 0.00 0.00 24876.47 10582.85 52428.80 00:18:05.696 00:18:05.696 Latency(us) 00:18:05.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.696 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:05.696 Nvme1n1 : 1.01 10628.60 41.52 0.00 0.00 11988.94 3665.16 18350.08 00:18:05.696 =================================================================================================================== 00:18:05.696 Total : 10628.60 41.52 0.00 0.00 11988.94 3665.16 18350.08 00:18:05.696 00:18:05.696 Latency(us) 00:18:05.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.696 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:05.696 Nvme1n1 : 1.01 9260.37 36.17 0.00 0.00 13766.68 6893.42 28156.21 00:18:05.696 =================================================================================================================== 00:18:05.696 Total : 9260.37 36.17 0.00 0.00 13766.68 6893.42 28156.21 00:18:05.696 00:18:05.696 Latency(us) 00:18:05.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.696 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:05.696 Nvme1n1 : 1.00 194665.32 760.41 0.00 0.00 655.04 248.79 831.34 00:18:05.696 =================================================================================================================== 00:18:05.696 Total : 194665.32 760.41 0.00 0.00 655.04 248.79 831.34 00:18:06.380 10:48:22 -- target/bdev_io_wait.sh@38 -- # wait 3456102 00:18:06.380 10:48:22 -- target/bdev_io_wait.sh@39 -- # wait 3456104 00:18:06.380 10:48:22 -- target/bdev_io_wait.sh@40 -- # wait 3456107 00:18:06.380 10:48:22 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.380 10:48:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.380 10:48:22 -- common/autotest_common.sh@10 -- # set +x 00:18:06.380 10:48:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.380 10:48:22 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:06.380 10:48:22 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:06.380 10:48:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:06.380 10:48:22 -- nvmf/common.sh@116 -- # sync 00:18:06.380 10:48:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:06.380 10:48:22 -- nvmf/common.sh@119 -- # set +e 00:18:06.380 10:48:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:06.380 10:48:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:06.380 rmmod nvme_tcp 00:18:06.380 rmmod nvme_fabrics 00:18:06.380 rmmod nvme_keyring 00:18:06.381 10:48:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:06.381 10:48:22 -- nvmf/common.sh@123 -- # set -e 00:18:06.381 10:48:22 -- nvmf/common.sh@124 -- # return 0 00:18:06.381 10:48:22 -- nvmf/common.sh@477 -- # '[' -n 3456064 ']' 00:18:06.381 10:48:22 -- nvmf/common.sh@478 -- # killprocess 3456064 00:18:06.381 10:48:22 -- common/autotest_common.sh@926 -- # '[' -z 3456064 ']' 00:18:06.381 10:48:22 -- 
common/autotest_common.sh@930 -- # kill -0 3456064 00:18:06.381 10:48:22 -- common/autotest_common.sh@931 -- # uname 00:18:06.381 10:48:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:06.381 10:48:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3456064 00:18:06.381 10:48:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:06.381 10:48:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:06.381 10:48:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3456064' 00:18:06.381 killing process with pid 3456064 00:18:06.381 10:48:23 -- common/autotest_common.sh@945 -- # kill 3456064 00:18:06.381 10:48:23 -- common/autotest_common.sh@950 -- # wait 3456064 00:18:06.660 10:48:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:06.660 10:48:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:06.660 10:48:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:06.660 10:48:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.660 10:48:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:06.660 10:48:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.660 10:48:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.660 10:48:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.560 10:48:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:08.560 00:18:08.560 real 0m7.200s 00:18:08.560 user 0m15.238s 00:18:08.560 sys 0m3.606s 00:18:08.560 10:48:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.560 10:48:25 -- common/autotest_common.sh@10 -- # set +x 00:18:08.560 ************************************ 00:18:08.560 END TEST nvmf_bdev_io_wait 00:18:08.560 ************************************ 00:18:08.560 10:48:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:08.560 10:48:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:08.560 10:48:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:08.560 10:48:25 -- common/autotest_common.sh@10 -- # set +x 00:18:08.560 ************************************ 00:18:08.560 START TEST nvmf_queue_depth 00:18:08.560 ************************************ 00:18:08.560 10:48:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:08.560 * Looking for test storage... 
00:18:08.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.560 10:48:25 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.560 10:48:25 -- nvmf/common.sh@7 -- # uname -s 00:18:08.560 10:48:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.560 10:48:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.560 10:48:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.560 10:48:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.560 10:48:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.560 10:48:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.560 10:48:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.560 10:48:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.560 10:48:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.560 10:48:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.560 10:48:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.560 10:48:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.560 10:48:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.560 10:48:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.560 10:48:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.560 10:48:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.560 10:48:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.560 10:48:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.560 10:48:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.560 10:48:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.560 10:48:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.560 10:48:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.560 10:48:25 -- paths/export.sh@5 -- # export PATH 00:18:08.560 10:48:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.560 10:48:25 -- nvmf/common.sh@46 -- # : 0 00:18:08.560 10:48:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:08.560 10:48:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:08.560 10:48:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:08.560 10:48:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.560 10:48:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.560 10:48:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:08.560 10:48:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:08.560 10:48:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:08.560 10:48:25 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:08.560 10:48:25 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:08.560 10:48:25 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.560 10:48:25 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:08.560 10:48:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:08.560 10:48:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.560 10:48:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:08.560 10:48:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:08.560 10:48:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:08.560 10:48:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.560 10:48:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.560 10:48:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.560 10:48:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:08.560 10:48:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:08.560 10:48:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:08.560 10:48:25 -- common/autotest_common.sh@10 -- # set +x 00:18:10.460 10:48:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:10.460 10:48:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:10.460 10:48:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:10.460 10:48:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:10.460 10:48:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:10.460 10:48:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:10.460 10:48:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:10.460 10:48:27 -- nvmf/common.sh@294 -- # net_devs=() 
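Before nvmftestinit is reached, queue_depth.sh only has to pin down a handful of knobs, all visible in the trace above. A condensed sketch using the values from this run (illustration only, not the script verbatim); the dedicated RPC socket is there so the initiator-side bdevperf never collides with nvmf_tgt on /var/tmp/spdk.sock:

    MALLOC_BDEV_SIZE=64                        # MiB of RAM-backed bdev the target will export
    MALLOC_BLOCK_SIZE=512                      # logical block size of that bdev
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # bdevperf gets its own RPC socket
    nvmftestinit                               # common.sh helper: find NICs, build the netns topology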
00:18:10.460 10:48:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:10.460 10:48:27 -- nvmf/common.sh@295 -- # e810=() 00:18:10.460 10:48:27 -- nvmf/common.sh@295 -- # local -ga e810 00:18:10.460 10:48:27 -- nvmf/common.sh@296 -- # x722=() 00:18:10.460 10:48:27 -- nvmf/common.sh@296 -- # local -ga x722 00:18:10.460 10:48:27 -- nvmf/common.sh@297 -- # mlx=() 00:18:10.460 10:48:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:10.460 10:48:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.460 10:48:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.461 10:48:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:10.461 10:48:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:10.461 10:48:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:10.461 10:48:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.461 10:48:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:10.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:10.461 10:48:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.461 10:48:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:10.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:10.461 10:48:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:10.461 10:48:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.461 10:48:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.461 10:48:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.461 10:48:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
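The pci_net_devs expansion traced here is how gather_supported_nvmf_pci_devs turns a PCI function into a kernel interface name: it globs sysfs and strips the path, and the same lookup repeats just below for the second E810 port. A minimal sketch for the first port found in this run (the standalone pci assignment is added for illustration):

    pci=0000:0a:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"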
00:18:10.461 10:48:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:10.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:10.461 10:48:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.461 10:48:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.461 10:48:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.461 10:48:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.461 10:48:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.461 10:48:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:10.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:10.461 10:48:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.461 10:48:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:10.461 10:48:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:10.461 10:48:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:10.461 10:48:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:10.461 10:48:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.461 10:48:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.461 10:48:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.461 10:48:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:10.461 10:48:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.461 10:48:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.461 10:48:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:10.461 10:48:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.461 10:48:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.461 10:48:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:10.461 10:48:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:10.461 10:48:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.461 10:48:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.719 10:48:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.719 10:48:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.719 10:48:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:10.719 10:48:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.719 10:48:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.719 10:48:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.719 10:48:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:10.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:18:10.719 00:18:10.719 --- 10.0.0.2 ping statistics --- 00:18:10.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.719 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:18:10.719 10:48:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:10.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:18:10.719 00:18:10.719 --- 10.0.0.1 ping statistics --- 00:18:10.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.719 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:10.719 10:48:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.719 10:48:27 -- nvmf/common.sh@410 -- # return 0 00:18:10.719 10:48:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.719 10:48:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.719 10:48:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.719 10:48:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.719 10:48:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.719 10:48:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.719 10:48:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:10.719 10:48:27 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:10.719 10:48:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.719 10:48:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:10.719 10:48:27 -- common/autotest_common.sh@10 -- # set +x 00:18:10.719 10:48:27 -- nvmf/common.sh@469 -- # nvmfpid=3458345 00:18:10.719 10:48:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.719 10:48:27 -- nvmf/common.sh@470 -- # waitforlisten 3458345 00:18:10.719 10:48:27 -- common/autotest_common.sh@819 -- # '[' -z 3458345 ']' 00:18:10.719 10:48:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.719 10:48:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.719 10:48:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.719 10:48:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.719 10:48:27 -- common/autotest_common.sh@10 -- # set +x 00:18:10.719 [2024-07-10 10:48:27.439283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:10.719 [2024-07-10 10:48:27.439369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.719 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.719 [2024-07-10 10:48:27.507479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.977 [2024-07-10 10:48:27.596315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.977 [2024-07-10 10:48:27.596491] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.977 [2024-07-10 10:48:27.596512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.977 [2024-07-10 10:48:27.596526] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
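The block above is nvmf_tcp_init from test/nvmf/common.sh giving the phy test its two-endpoint topology: one E810 port is moved into a private network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), TCP/4420 is opened, and both directions are ping-checked before nvmf_tgt starts. Condensed from the commands traced in this run (interface names and addresses are this rig's, not fixed constants):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first E810 port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator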
00:18:10.977 [2024-07-10 10:48:27.596557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.907 10:48:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.907 10:48:28 -- common/autotest_common.sh@852 -- # return 0 00:18:11.907 10:48:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.907 10:48:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 10:48:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.907 10:48:28 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.907 10:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 [2024-07-10 10:48:28.395191] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.907 10:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.907 10:48:28 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.907 10:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 Malloc0 00:18:11.907 10:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.907 10:48:28 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.907 10:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 10:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.907 10:48:28 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.907 10:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 10:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.907 10:48:28 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.907 10:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 [2024-07-10 10:48:28.453250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.907 10:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.907 10:48:28 -- target/queue_depth.sh@30 -- # bdevperf_pid=3458501 00:18:11.907 10:48:28 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.907 10:48:28 -- target/queue_depth.sh@33 -- # waitforlisten 3458501 /var/tmp/bdevperf.sock 00:18:11.907 10:48:28 -- common/autotest_common.sh@819 -- # '[' -z 3458501 ']' 00:18:11.907 10:48:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.907 10:48:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.907 10:48:28 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:11.907 10:48:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:11.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.907 10:48:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.907 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 [2024-07-10 10:48:28.498414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:11.907 [2024-07-10 10:48:28.498500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458501 ] 00:18:11.907 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.907 [2024-07-10 10:48:28.556497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.907 [2024-07-10 10:48:28.639837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.840 10:48:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.840 10:48:29 -- common/autotest_common.sh@852 -- # return 0 00:18:12.840 10:48:29 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:12.840 10:48:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.840 10:48:29 -- common/autotest_common.sh@10 -- # set +x 00:18:12.840 NVMe0n1 00:18:12.840 10:48:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.840 10:48:29 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.097 Running I/O for 10 seconds... 00:18:23.060 00:18:23.060 Latency(us) 00:18:23.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.060 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:23.060 Verification LBA range: start 0x0 length 0x4000 00:18:23.060 NVMe0n1 : 10.07 12268.78 47.92 0.00 0.00 83137.08 14854.83 62137.84 00:18:23.060 =================================================================================================================== 00:18:23.060 Total : 12268.78 47.92 0.00 0.00 83137.08 14854.83 62137.84 00:18:23.060 0 00:18:23.060 10:48:39 -- target/queue_depth.sh@39 -- # killprocess 3458501 00:18:23.060 10:48:39 -- common/autotest_common.sh@926 -- # '[' -z 3458501 ']' 00:18:23.060 10:48:39 -- common/autotest_common.sh@930 -- # kill -0 3458501 00:18:23.060 10:48:39 -- common/autotest_common.sh@931 -- # uname 00:18:23.060 10:48:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:23.060 10:48:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3458501 00:18:23.060 10:48:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:23.060 10:48:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:23.060 10:48:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3458501' 00:18:23.060 killing process with pid 3458501 00:18:23.060 10:48:39 -- common/autotest_common.sh@945 -- # kill 3458501 00:18:23.060 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.060 00:18:23.060 Latency(us) 00:18:23.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.060 =================================================================================================================== 00:18:23.060 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.060 10:48:39 -- 
common/autotest_common.sh@950 -- # wait 3458501 00:18:23.317 10:48:40 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:23.317 10:48:40 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:23.317 10:48:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:23.317 10:48:40 -- nvmf/common.sh@116 -- # sync 00:18:23.317 10:48:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:23.317 10:48:40 -- nvmf/common.sh@119 -- # set +e 00:18:23.317 10:48:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:23.317 10:48:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:23.317 rmmod nvme_tcp 00:18:23.317 rmmod nvme_fabrics 00:18:23.317 rmmod nvme_keyring 00:18:23.317 10:48:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:23.317 10:48:40 -- nvmf/common.sh@123 -- # set -e 00:18:23.317 10:48:40 -- nvmf/common.sh@124 -- # return 0 00:18:23.317 10:48:40 -- nvmf/common.sh@477 -- # '[' -n 3458345 ']' 00:18:23.317 10:48:40 -- nvmf/common.sh@478 -- # killprocess 3458345 00:18:23.317 10:48:40 -- common/autotest_common.sh@926 -- # '[' -z 3458345 ']' 00:18:23.317 10:48:40 -- common/autotest_common.sh@930 -- # kill -0 3458345 00:18:23.317 10:48:40 -- common/autotest_common.sh@931 -- # uname 00:18:23.317 10:48:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:23.317 10:48:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3458345 00:18:23.575 10:48:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:23.575 10:48:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:23.575 10:48:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3458345' 00:18:23.575 killing process with pid 3458345 00:18:23.575 10:48:40 -- common/autotest_common.sh@945 -- # kill 3458345 00:18:23.575 10:48:40 -- common/autotest_common.sh@950 -- # wait 3458345 00:18:23.832 10:48:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:23.832 10:48:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:23.832 10:48:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:23.832 10:48:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.832 10:48:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:23.832 10:48:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.832 10:48:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.832 10:48:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.730 10:48:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:25.730 00:18:25.730 real 0m17.161s 00:18:25.730 user 0m24.784s 00:18:25.730 sys 0m3.014s 00:18:25.730 10:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.730 10:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:25.730 ************************************ 00:18:25.730 END TEST nvmf_queue_depth 00:18:25.730 ************************************ 00:18:25.730 10:48:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:25.730 10:48:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:25.730 10:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.730 10:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:25.730 ************************************ 00:18:25.730 START TEST nvmf_multipath 00:18:25.730 ************************************ 00:18:25.730 10:48:42 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:25.730 * Looking for test storage... 00:18:25.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.730 10:48:42 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.730 10:48:42 -- nvmf/common.sh@7 -- # uname -s 00:18:25.730 10:48:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.730 10:48:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.730 10:48:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.730 10:48:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.730 10:48:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.730 10:48:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.730 10:48:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.730 10:48:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.730 10:48:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.730 10:48:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.730 10:48:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.730 10:48:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.730 10:48:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.730 10:48:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.730 10:48:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.730 10:48:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.730 10:48:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.730 10:48:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.730 10:48:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.730 10:48:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.730 10:48:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.730 10:48:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.730 10:48:42 -- paths/export.sh@5 -- # export PATH 00:18:25.730 10:48:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.730 10:48:42 -- nvmf/common.sh@46 -- # : 0 00:18:25.730 10:48:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.730 10:48:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.730 10:48:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.730 10:48:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.730 10:48:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.730 10:48:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.730 10:48:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.730 10:48:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.730 10:48:42 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.730 10:48:42 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.988 10:48:42 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:25.988 10:48:42 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.988 10:48:42 -- target/multipath.sh@43 -- # nvmftestinit 00:18:25.988 10:48:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:25.988 10:48:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.988 10:48:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.988 10:48:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.988 10:48:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.988 10:48:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.988 10:48:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.988 10:48:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.988 10:48:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.988 10:48:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.988 10:48:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.988 10:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:27.887 10:48:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.887 10:48:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:27.887 10:48:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:27.887 10:48:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:27.887 10:48:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:27.887 10:48:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:27.887 10:48:44 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:27.887 10:48:44 -- nvmf/common.sh@294 -- # net_devs=() 00:18:27.887 10:48:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:27.887 10:48:44 -- nvmf/common.sh@295 -- # e810=() 00:18:27.887 10:48:44 -- nvmf/common.sh@295 -- # local -ga e810 00:18:27.887 10:48:44 -- nvmf/common.sh@296 -- # x722=() 00:18:27.887 10:48:44 -- nvmf/common.sh@296 -- # local -ga x722 00:18:27.887 10:48:44 -- nvmf/common.sh@297 -- # mlx=() 00:18:27.887 10:48:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:27.887 10:48:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.887 10:48:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:27.887 10:48:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:27.887 10:48:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:27.887 10:48:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.887 10:48:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.887 10:48:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.887 10:48:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.887 10:48:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:27.887 10:48:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.887 10:48:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.887 10:48:44 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:27.887 10:48:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.887 10:48:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.887 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.887 10:48:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.887 10:48:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.887 10:48:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.887 10:48:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.887 10:48:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.887 10:48:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.887 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.887 10:48:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.887 10:48:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:27.887 10:48:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:27.887 10:48:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:27.887 10:48:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:27.887 10:48:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.887 10:48:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.887 10:48:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.887 10:48:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:27.887 10:48:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.887 10:48:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.887 10:48:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:27.887 10:48:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.887 10:48:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.887 10:48:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:27.887 10:48:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:27.887 10:48:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.887 10:48:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.887 10:48:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.887 10:48:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.887 10:48:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:27.887 10:48:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.887 10:48:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.144 10:48:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.144 10:48:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:28.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:18:28.144 00:18:28.144 --- 10.0.0.2 ping statistics --- 00:18:28.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.144 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:28.144 10:48:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:18:28.144 00:18:28.144 --- 10.0.0.1 ping statistics --- 00:18:28.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.144 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:28.144 10:48:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.144 10:48:44 -- nvmf/common.sh@410 -- # return 0 00:18:28.144 10:48:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:28.144 10:48:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.144 10:48:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:28.144 10:48:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:28.144 10:48:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.144 10:48:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:28.144 10:48:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:28.144 10:48:44 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:28.144 10:48:44 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:28.144 only one NIC for nvmf test 00:18:28.144 10:48:44 -- target/multipath.sh@47 -- # nvmftestfini 00:18:28.144 10:48:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:28.144 10:48:44 -- nvmf/common.sh@116 -- # sync 00:18:28.144 10:48:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:28.144 10:48:44 -- nvmf/common.sh@119 -- # set +e 00:18:28.144 10:48:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:28.144 10:48:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:28.144 rmmod nvme_tcp 00:18:28.144 rmmod nvme_fabrics 00:18:28.144 rmmod nvme_keyring 00:18:28.144 10:48:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:28.144 10:48:44 -- nvmf/common.sh@123 -- # set -e 00:18:28.144 10:48:44 -- nvmf/common.sh@124 -- # return 0 00:18:28.144 10:48:44 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:28.144 10:48:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:28.144 10:48:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:28.144 10:48:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:28.144 10:48:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.144 10:48:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:28.144 10:48:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.144 10:48:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.144 10:48:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.045 10:48:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:30.045 10:48:46 -- target/multipath.sh@48 -- # exit 0 00:18:30.045 10:48:46 -- target/multipath.sh@1 -- # nvmftestfini 00:18:30.045 10:48:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.045 10:48:46 -- nvmf/common.sh@116 -- # sync 00:18:30.045 10:48:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:30.045 10:48:46 -- nvmf/common.sh@119 -- # set +e 00:18:30.045 10:48:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.045 10:48:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:30.045 10:48:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.045 10:48:46 -- nvmf/common.sh@123 -- # set -e 00:18:30.045 10:48:46 -- nvmf/common.sh@124 -- # return 0 00:18:30.045 10:48:46 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:30.045 10:48:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:30.045 10:48:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:30.045 10:48:46 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:30.045 10:48:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.045 10:48:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:30.045 10:48:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.045 10:48:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.045 10:48:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.303 10:48:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:30.303 00:18:30.303 real 0m4.385s 00:18:30.303 user 0m0.823s 00:18:30.303 sys 0m1.553s 00:18:30.303 10:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.303 10:48:46 -- common/autotest_common.sh@10 -- # set +x 00:18:30.303 ************************************ 00:18:30.303 END TEST nvmf_multipath 00:18:30.303 ************************************ 00:18:30.303 10:48:46 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:30.304 10:48:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:30.304 10:48:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:30.304 10:48:46 -- common/autotest_common.sh@10 -- # set +x 00:18:30.304 ************************************ 00:18:30.304 START TEST nvmf_zcopy 00:18:30.304 ************************************ 00:18:30.304 10:48:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:30.304 * Looking for test storage... 00:18:30.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.304 10:48:46 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.304 10:48:46 -- nvmf/common.sh@7 -- # uname -s 00:18:30.304 10:48:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.304 10:48:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.304 10:48:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.304 10:48:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.304 10:48:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.304 10:48:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.304 10:48:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.304 10:48:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.304 10:48:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.304 10:48:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.304 10:48:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.304 10:48:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.304 10:48:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.304 10:48:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.304 10:48:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.304 10:48:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.304 10:48:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.304 10:48:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.304 10:48:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.304 10:48:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.304 10:48:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.304 10:48:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.304 10:48:46 -- paths/export.sh@5 -- # export PATH 00:18:30.304 10:48:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.304 10:48:46 -- nvmf/common.sh@46 -- # : 0 00:18:30.304 10:48:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:30.304 10:48:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:30.304 10:48:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:30.304 10:48:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.304 10:48:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.304 10:48:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:30.304 10:48:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:30.304 10:48:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:30.304 10:48:46 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:30.304 10:48:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:30.304 10:48:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.304 10:48:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:30.304 10:48:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:30.304 10:48:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:30.304 10:48:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.304 10:48:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.304 10:48:46 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.304 10:48:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:30.304 10:48:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:30.304 10:48:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:30.304 10:48:46 -- common/autotest_common.sh@10 -- # set +x 00:18:32.206 10:48:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:32.206 10:48:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:32.206 10:48:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:32.206 10:48:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:32.206 10:48:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:32.206 10:48:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:32.206 10:48:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:32.206 10:48:48 -- nvmf/common.sh@294 -- # net_devs=() 00:18:32.206 10:48:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:32.206 10:48:48 -- nvmf/common.sh@295 -- # e810=() 00:18:32.206 10:48:48 -- nvmf/common.sh@295 -- # local -ga e810 00:18:32.206 10:48:48 -- nvmf/common.sh@296 -- # x722=() 00:18:32.206 10:48:48 -- nvmf/common.sh@296 -- # local -ga x722 00:18:32.206 10:48:48 -- nvmf/common.sh@297 -- # mlx=() 00:18:32.206 10:48:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:32.206 10:48:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.206 10:48:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:32.206 10:48:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:32.206 10:48:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:32.206 10:48:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.206 10:48:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.206 10:48:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.206 10:48:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.206 
10:48:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:32.206 10:48:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.206 10:48:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.206 10:48:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.206 10:48:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.206 10:48:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.206 10:48:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.206 10:48:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.206 10:48:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.206 10:48:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.206 10:48:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.206 10:48:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.206 10:48:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.206 10:48:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:32.206 10:48:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:32.206 10:48:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:32.206 10:48:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:32.206 10:48:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.206 10:48:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.206 10:48:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.206 10:48:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:32.206 10:48:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.206 10:48:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.206 10:48:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:32.206 10:48:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.206 10:48:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.206 10:48:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:32.206 10:48:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:32.206 10:48:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.206 10:48:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.465 10:48:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.465 10:48:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.465 10:48:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:32.465 10:48:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.465 10:48:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.465 10:48:49 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.465 10:48:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:32.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:32.465 00:18:32.465 --- 10.0.0.2 ping statistics --- 00:18:32.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.465 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:32.465 10:48:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:32.465 00:18:32.465 --- 10.0.0.1 ping statistics --- 00:18:32.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.465 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:32.465 10:48:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.465 10:48:49 -- nvmf/common.sh@410 -- # return 0 00:18:32.465 10:48:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.465 10:48:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.465 10:48:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.465 10:48:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.465 10:48:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.465 10:48:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.465 10:48:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.465 10:48:49 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:32.465 10:48:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.465 10:48:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.465 10:48:49 -- common/autotest_common.sh@10 -- # set +x 00:18:32.465 10:48:49 -- nvmf/common.sh@469 -- # nvmfpid=3463748 00:18:32.465 10:48:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.465 10:48:49 -- nvmf/common.sh@470 -- # waitforlisten 3463748 00:18:32.465 10:48:49 -- common/autotest_common.sh@819 -- # '[' -z 3463748 ']' 00:18:32.465 10:48:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.465 10:48:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.465 10:48:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.465 10:48:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.465 10:48:49 -- common/autotest_common.sh@10 -- # set +x 00:18:32.465 [2024-07-10 10:48:49.177801] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
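For anyone reproducing this environment outside the CI harness: the interface and namespace plumbing traced above (nvmf/common.sh@228 through @267) reduces to roughly the shell below. This is a hand-written sketch that assumes the same back-to-back E810 ports (cvl_0_0, cvl_0_1) and the same addresses seen in this run; it is not the harness code itself.

  # target port is isolated in its own network namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns
  modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator support

With the topology in place, the target is then started inside the namespace exactly as the trace shows: ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2.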
00:18:32.465 [2024-07-10 10:48:49.177892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.465 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.465 [2024-07-10 10:48:49.243280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.724 [2024-07-10 10:48:49.331212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:32.724 [2024-07-10 10:48:49.331364] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.724 [2024-07-10 10:48:49.331395] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.724 [2024-07-10 10:48:49.331407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.724 [2024-07-10 10:48:49.331457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.658 10:48:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.658 10:48:50 -- common/autotest_common.sh@852 -- # return 0 00:18:33.658 10:48:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.658 10:48:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 10:48:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.658 10:48:50 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:33.658 10:48:50 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:33.658 10:48:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 [2024-07-10 10:48:50.158064] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.658 10:48:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.658 10:48:50 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:33.658 10:48:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 10:48:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.658 10:48:50 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.658 10:48:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 [2024-07-10 10:48:50.174231] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.658 10:48:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.658 10:48:50 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:33.658 10:48:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 10:48:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.658 10:48:50 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:33.658 10:48:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 malloc0 00:18:33.658 10:48:50 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:33.658 10:48:50 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.658 10:48:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.658 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 10:48:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.658 10:48:50 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:33.658 10:48:50 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:33.658 10:48:50 -- nvmf/common.sh@520 -- # config=() 00:18:33.658 10:48:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.658 10:48:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.658 10:48:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.658 { 00:18:33.658 "params": { 00:18:33.658 "name": "Nvme$subsystem", 00:18:33.658 "trtype": "$TEST_TRANSPORT", 00:18:33.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.658 "adrfam": "ipv4", 00:18:33.658 "trsvcid": "$NVMF_PORT", 00:18:33.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.658 "hdgst": ${hdgst:-false}, 00:18:33.658 "ddgst": ${ddgst:-false} 00:18:33.658 }, 00:18:33.658 "method": "bdev_nvme_attach_controller" 00:18:33.658 } 00:18:33.658 EOF 00:18:33.658 )") 00:18:33.658 10:48:50 -- nvmf/common.sh@542 -- # cat 00:18:33.658 10:48:50 -- nvmf/common.sh@544 -- # jq . 00:18:33.658 10:48:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.658 10:48:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.658 "params": { 00:18:33.658 "name": "Nvme1", 00:18:33.658 "trtype": "tcp", 00:18:33.658 "traddr": "10.0.0.2", 00:18:33.658 "adrfam": "ipv4", 00:18:33.658 "trsvcid": "4420", 00:18:33.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.659 "hdgst": false, 00:18:33.659 "ddgst": false 00:18:33.659 }, 00:18:33.659 "method": "bdev_nvme_attach_controller" 00:18:33.659 }' 00:18:33.659 [2024-07-10 10:48:50.247543] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:33.659 [2024-07-10 10:48:50.247627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463904 ] 00:18:33.659 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.659 [2024-07-10 10:48:50.310810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.659 [2024-07-10 10:48:50.402435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.917 Running I/O for 10 seconds... 
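The rpc_cmd calls traced at target/zcopy.sh@22 through @30 above provision the target before the first bdevperf pass. As a reference, the same sequence expressed as direct scripts/rpc.py invocations would look like the sketch below; the rpc.py spelling is an assumption about what the rpc_cmd wrapper resolves to, while the method names and arguments are taken verbatim from the trace.

  # talks to the nvmf_tgt RPC socket (/var/tmp/spdk.sock, as echoed in the trace)
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy          # transport opts as traced; --zcopy is the option under test
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB malloc bdev, 4096-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that subsystem at 10.0.0.2:4420 using the generated bdev_nvme_attach_controller JSON shown in the trace and starts the 10-second verify workload.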
00:18:43.886 00:18:43.886 Latency(us) 00:18:43.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.886 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:43.886 Verification LBA range: start 0x0 length 0x1000 00:18:43.886 Nvme1n1 : 10.01 8642.84 67.52 0.00 0.00 14774.43 1638.40 24078.41 00:18:43.886 =================================================================================================================== 00:18:43.886 Total : 8642.84 67.52 0.00 0.00 14774.43 1638.40 24078.41 00:18:44.144 10:49:00 -- target/zcopy.sh@39 -- # perfpid=3465245 00:18:44.144 10:49:00 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:44.144 10:49:00 -- common/autotest_common.sh@10 -- # set +x 00:18:44.144 10:49:00 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:44.144 10:49:00 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:44.144 10:49:00 -- nvmf/common.sh@520 -- # config=() 00:18:44.144 10:49:00 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.144 10:49:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.144 10:49:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.144 { 00:18:44.144 "params": { 00:18:44.144 "name": "Nvme$subsystem", 00:18:44.144 "trtype": "$TEST_TRANSPORT", 00:18:44.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.144 "adrfam": "ipv4", 00:18:44.144 "trsvcid": "$NVMF_PORT", 00:18:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.144 "hdgst": ${hdgst:-false}, 00:18:44.144 "ddgst": ${ddgst:-false} 00:18:44.144 }, 00:18:44.144 "method": "bdev_nvme_attach_controller" 00:18:44.144 } 00:18:44.144 EOF 00:18:44.144 )") 00:18:44.144 10:49:00 -- nvmf/common.sh@542 -- # cat 00:18:44.144 [2024-07-10 10:49:00.859056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.859102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 10:49:00 -- nvmf/common.sh@544 -- # jq . 
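For readability: the two bdevperf passes in this test differ only in workload shape and runtime. The command lines captured in the trace break down as follows; the flag glosses are paraphrased from bdevperf usage rather than taken from this log.

  # pass 1 (target/zcopy.sh@33): 10 s data-integrity check over the zcopy path
  bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
  # pass 2 (target/zcopy.sh@37): 5 s mixed workload while namespace add/remove is exercised
  bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
  #   --json <fd>  bdev configuration (the generated bdev_nvme_attach_controller JSON)
  #   -t           run time in seconds
  #   -q           queue depth
  #   -w           I/O pattern (verify, randrw, ...)
  #   -M           read percentage for mixed workloads (50 = half reads, half writes)
  #   -o           I/O size in bytes (8192 = 8 KiB)

Read against those flags, the pass-1 summary above says: 8642.84 IOPS (67.52 MiB/s) sustained over 10.01 s at queue depth 128 and 8 KiB I/O, average latency 14774.43 us, min 1638.40 us, max 24078.41 us, with no failures or timeouts.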
00:18:44.144 10:49:00 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.144 10:49:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.144 "params": { 00:18:44.144 "name": "Nvme1", 00:18:44.144 "trtype": "tcp", 00:18:44.144 "traddr": "10.0.0.2", 00:18:44.144 "adrfam": "ipv4", 00:18:44.144 "trsvcid": "4420", 00:18:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.144 "hdgst": false, 00:18:44.144 "ddgst": false 00:18:44.144 }, 00:18:44.144 "method": "bdev_nvme_attach_controller" 00:18:44.144 }' 00:18:44.144 [2024-07-10 10:49:00.867025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.867059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.875045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.875070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.883053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.883074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.891072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.891092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.893185] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:44.144 [2024-07-10 10:49:00.893252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465245 ] 00:18:44.144 [2024-07-10 10:49:00.899090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.899110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.907115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.907135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.915134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.915154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.144 [2024-07-10 10:49:00.923154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.923172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.931193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.931218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.939216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.939240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-10 10:49:00.947238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:44.144 [2024-07-10 10:49:00.947262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.145 [2024-07-10 10:49:00.955259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.145 [2024-07-10 10:49:00.955282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.145 [2024-07-10 10:49:00.955907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.145 [2024-07-10 10:49:00.963306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.145 [2024-07-10 10:49:00.963338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:00.971335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:00.971372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:00.979327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:00.979352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:00.987350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:00.987374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:00.995378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:00.995404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:01.003397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:01.003422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:01.011450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:01.011497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.403 [2024-07-10 10:49:01.019480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.403 [2024-07-10 10:49:01.019508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.027483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.027504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.035500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.035521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.043515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.043536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.050049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.404 [2024-07-10 10:49:01.051521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.051551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.059549] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.059570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.067590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.067620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.075621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.075654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.083643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.083677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.091666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.091718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.099690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.099748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.107722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.107764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.115717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.115742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.123771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.123826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.131803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.131844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.139805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.139835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.147815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.147847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.155887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.155916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.163906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.163933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.171928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.171955] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.179954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.179982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.187973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.188000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.195997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.196024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.204017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.204044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-10 10:49:01.212041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-10 10:49:01.212066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.255689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.255716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.260190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.260218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 Running I/O for 5 seconds... 
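The long run of 'Requested NSID 1 already in use' / 'Unable to add namespace' messages surrounding this point is emitted while the 5-second randrw pass is in flight, and the run proceeds past them, so they read as the harness repeatedly re-adding namespace 1 against active I/O rather than as a test failure. A hypothetical shape of such a hot add/remove loop (not the literal zcopy.sh code) is sketched below; $perfpid stands for the bdevperf PID recorded above (3465245).

  # hypothetical sketch: toggle the namespace while bdevperf keeps I/O outstanding;
  # individual RPC failures are tolerated, which matches the repeated errors seen in the trace
  while kill -0 "$perfpid" 2>/dev/null; do
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done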
00:18:44.663 [2024-07-10 10:49:01.268211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.268237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.280551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.280579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.290842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.290873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.302377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.302408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.313534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.313562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.324665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.324693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.335694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.335738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.348785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.348816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.359039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.359070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.370234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.370264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.381362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.381402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.392343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.392373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.405592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.405620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.415801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.415832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.426495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 
[2024-07-10 10:49:01.426523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.437564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.437592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.448804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.448835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.459857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.459888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.470909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.470939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.663 [2024-07-10 10:49:01.483895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.663 [2024-07-10 10:49:01.483925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.493774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.493804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.505341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.505371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.516188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.516218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.528954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.528984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.538490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.538517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.549692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.549738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.562915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.562946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.573476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.573504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.584876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.584906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.596170] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.596200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.607996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.608027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.619065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.619096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.632102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.632133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.642493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.642520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.653338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.653369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.663888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.663918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.674951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.674982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.686034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.686065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.696908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.696939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.707725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.707756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.718965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.718996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.729898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.729932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.923 [2024-07-10 10:49:01.740871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.923 [2024-07-10 10:49:01.740901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.751960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.181 [2024-07-10 10:49:01.751990] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.765230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.181 [2024-07-10 10:49:01.765260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.775826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.181 [2024-07-10 10:49:01.775856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.786629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.181 [2024-07-10 10:49:01.786657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.798963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.181 [2024-07-10 10:49:01.798994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.809056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.181 [2024-07-10 10:49:01.809087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.181 [2024-07-10 10:49:01.820404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.820475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.831338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.831369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.842141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.842172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.855121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.855152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.865187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.865217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.876258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.876288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.886902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.886933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.898073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.898104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.911056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.911088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.920247] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.920278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.931506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.931534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.942252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.942282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.953548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.953576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.964455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.964505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.977097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.977127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.986677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.986710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.182 [2024-07-10 10:49:01.997841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.182 [2024-07-10 10:49:01.997869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.010170] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.010200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.019916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.019946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.031022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.031052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.043918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.043949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.053951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.053981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.065901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.065932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.076513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.076541] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.087993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.088023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.098955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.098985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.109836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.109866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.121012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.121042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.134110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.134140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.144461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.144504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.440 [2024-07-10 10:49:02.156073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.440 [2024-07-10 10:49:02.156103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.167069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.167099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.180292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.180323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.190307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.190339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.201326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.201364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.212337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.212368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.223268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.223298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.236020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.236051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.245714] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.245745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.441 [2024-07-10 10:49:02.257554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.441 [2024-07-10 10:49:02.257582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.268530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.268557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.279540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.279567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.292551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.292579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.302371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.302401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.313864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.313894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.324661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.324688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.335438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.335482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.348089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.348119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.358077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.358107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.368937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.368968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.379898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.379929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.391232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.391262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.401557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.401584] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.412595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.412629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.425294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.425324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.435579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.435606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.446464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.446507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.459341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.459371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.469912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.469943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.480943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.480973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.493836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.493866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.503859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.503890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.699 [2024-07-10 10:49:02.514869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.699 [2024-07-10 10:49:02.514900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.958 [2024-07-10 10:49:02.526056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.958 [2024-07-10 10:49:02.526086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.958 [2024-07-10 10:49:02.536833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.958 [2024-07-10 10:49:02.536863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.958 [2024-07-10 10:49:02.547785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.958 [2024-07-10 10:49:02.547815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.958 [2024-07-10 10:49:02.560408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.958 [2024-07-10 10:49:02.560449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.958 [2024-07-10 10:49:02.570058] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:45.958 [2024-07-10 10:49:02.570088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:45.958 [2024-07-10 10:49:02.581302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:45.958 [2024-07-10 10:49:02.581332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of error lines repeats for every subsequent add-namespace attempt, with timestamps advancing from 10:49:02.592 through 10:49:05.972 ...]
00:18:49.319 [2024-07-10 10:49:05.983856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:49.319 [2024-07-10 10:49:05.983886]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:05.994461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:05.994491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.005465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.005495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.016103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.016146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.026768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.026797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.037843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.037873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.049393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.049433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.060255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.060285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.071453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.071483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.082649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.082679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.095780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.095809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.105582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.105612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.116971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.117001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.127770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.127800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.319 [2024-07-10 10:49:06.140144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.319 [2024-07-10 10:49:06.140174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.577 [2024-07-10 10:49:06.150057] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.577 [2024-07-10 10:49:06.150086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.577 [2024-07-10 10:49:06.161260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.577 [2024-07-10 10:49:06.161290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.577 [2024-07-10 10:49:06.171722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.577 [2024-07-10 10:49:06.171751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.577 [2024-07-10 10:49:06.182297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.577 [2024-07-10 10:49:06.182327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.577 [2024-07-10 10:49:06.193338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.577 [2024-07-10 10:49:06.193368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.204450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.204490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.217631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.217661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.228108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.228146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.239285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.239317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.250706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.250735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.261710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.261740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.272444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.272480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.283417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.283455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.290853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.290882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 00:18:49.578 Latency(us) 00:18:49.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.578 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 128, IO size: 8192) 00:18:49.578 Nvme1n1 : 5.01 11601.56 90.64 0.00 0.00 11017.17 4708.88 21651.15 00:18:49.578 =================================================================================================================== 00:18:49.578 Total : 11601.56 90.64 0.00 0.00 11017.17 4708.88 21651.15 00:18:49.578 [2024-07-10 10:49:06.298915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.298943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.306934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.306962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.315000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.315046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.323019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.323066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.331035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.331080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.339055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.339102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.347086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.347128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.355114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.355160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.363133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.363180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.371156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.371203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.379181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.379227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.387207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.387256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.578 [2024-07-10 10:49:06.395229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.578 [2024-07-10 10:49:06.395278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.403252] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.403296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.411276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.411322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.419287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.419334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.427293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.427334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.435294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.435320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.443335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.443372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.451376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.451423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.459399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.459456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.467385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.467413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.475412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.475448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.483486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.483532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.491494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.491536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.499483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.499510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.507503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.507528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 [2024-07-10 10:49:06.515511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.837 [2024-07-10 10:49:06.515535] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3465245) - No such process 00:18:49.837 10:49:06 -- target/zcopy.sh@49 -- # wait 3465245 00:18:49.837 10:49:06 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:49.837 10:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.837 10:49:06 -- common/autotest_common.sh@10 -- # set +x 00:18:49.837 10:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.837 10:49:06 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:49.837 10:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.837 10:49:06 -- common/autotest_common.sh@10 -- # set +x 00:18:49.837 delay0 00:18:49.837 10:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.837 10:49:06 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:49.837 10:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.837 10:49:06 -- common/autotest_common.sh@10 -- # set +x 00:18:49.837 10:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.837 10:49:06 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:49.837 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.094 [2024-07-10 10:49:06.674551] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:56.649 Initializing NVMe Controllers 00:18:56.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:56.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:56.649 Initialization complete. Launching workers. 
00:18:56.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 925 00:18:56.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1207, failed to submit 38 00:18:56.649 success 1028, unsuccess 179, failed 0 00:18:56.649 10:49:12 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:56.649 10:49:12 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:56.649 10:49:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:56.649 10:49:12 -- nvmf/common.sh@116 -- # sync 00:18:56.649 10:49:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:56.649 10:49:12 -- nvmf/common.sh@119 -- # set +e 00:18:56.649 10:49:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:56.649 10:49:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:56.649 rmmod nvme_tcp 00:18:56.649 rmmod nvme_fabrics 00:18:56.649 rmmod nvme_keyring 00:18:56.649 10:49:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:56.649 10:49:12 -- nvmf/common.sh@123 -- # set -e 00:18:56.649 10:49:12 -- nvmf/common.sh@124 -- # return 0 00:18:56.649 10:49:12 -- nvmf/common.sh@477 -- # '[' -n 3463748 ']' 00:18:56.649 10:49:12 -- nvmf/common.sh@478 -- # killprocess 3463748 00:18:56.649 10:49:12 -- common/autotest_common.sh@926 -- # '[' -z 3463748 ']' 00:18:56.649 10:49:12 -- common/autotest_common.sh@930 -- # kill -0 3463748 00:18:56.649 10:49:12 -- common/autotest_common.sh@931 -- # uname 00:18:56.649 10:49:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:56.649 10:49:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3463748 00:18:56.649 10:49:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:56.649 10:49:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:56.649 10:49:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3463748' 00:18:56.649 killing process with pid 3463748 00:18:56.649 10:49:12 -- common/autotest_common.sh@945 -- # kill 3463748 00:18:56.649 10:49:12 -- common/autotest_common.sh@950 -- # wait 3463748 00:18:56.649 10:49:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:56.649 10:49:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:56.649 10:49:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:56.649 10:49:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.649 10:49:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:56.649 10:49:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.649 10:49:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.649 10:49:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.611 10:49:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:58.611 00:18:58.611 real 0m28.381s 00:18:58.611 user 0m41.750s 00:18:58.611 sys 0m8.268s 00:18:58.611 10:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.611 10:49:15 -- common/autotest_common.sh@10 -- # set +x 00:18:58.611 ************************************ 00:18:58.611 END TEST nvmf_zcopy 00:18:58.611 ************************************ 00:18:58.611 10:49:15 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:58.611 10:49:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:58.611 10:49:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:58.611 10:49:15 -- common/autotest_common.sh@10 -- # set +x 00:18:58.611 
************************************ 00:18:58.611 START TEST nvmf_nmic 00:18:58.611 ************************************ 00:18:58.611 10:49:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:58.611 * Looking for test storage... 00:18:58.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.611 10:49:15 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.611 10:49:15 -- nvmf/common.sh@7 -- # uname -s 00:18:58.611 10:49:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.611 10:49:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.611 10:49:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.611 10:49:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.611 10:49:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.611 10:49:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.611 10:49:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.611 10:49:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.611 10:49:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.611 10:49:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.611 10:49:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.611 10:49:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.611 10:49:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.611 10:49:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.611 10:49:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.611 10:49:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.611 10:49:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.611 10:49:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.611 10:49:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.611 10:49:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.611 10:49:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.611 10:49:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.611 10:49:15 -- paths/export.sh@5 -- # export PATH 00:18:58.611 10:49:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.611 10:49:15 -- nvmf/common.sh@46 -- # : 0 00:18:58.611 10:49:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:58.611 10:49:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:58.611 10:49:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:58.611 10:49:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.611 10:49:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.611 10:49:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:58.611 10:49:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:58.611 10:49:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:58.611 10:49:15 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.611 10:49:15 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.611 10:49:15 -- target/nmic.sh@14 -- # nvmftestinit 00:18:58.611 10:49:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:58.611 10:49:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.611 10:49:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:58.611 10:49:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:58.611 10:49:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:58.611 10:49:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.611 10:49:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.611 10:49:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.611 10:49:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:58.611 10:49:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:58.611 10:49:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:58.611 10:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:00.539 10:49:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:00.539 10:49:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:00.539 10:49:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:00.539 10:49:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:00.539 10:49:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:00.539 10:49:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:00.539 10:49:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:00.539 10:49:17 -- nvmf/common.sh@294 -- # net_devs=() 00:19:00.539 10:49:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:00.539 10:49:17 -- nvmf/common.sh@295 -- # 
e810=() 00:19:00.539 10:49:17 -- nvmf/common.sh@295 -- # local -ga e810 00:19:00.539 10:49:17 -- nvmf/common.sh@296 -- # x722=() 00:19:00.539 10:49:17 -- nvmf/common.sh@296 -- # local -ga x722 00:19:00.539 10:49:17 -- nvmf/common.sh@297 -- # mlx=() 00:19:00.539 10:49:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:00.539 10:49:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.539 10:49:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:00.539 10:49:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:00.539 10:49:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:00.539 10:49:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:00.539 10:49:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:00.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:00.539 10:49:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:00.539 10:49:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:00.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:00.539 10:49:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:00.539 10:49:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:00.539 10:49:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:00.539 10:49:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.540 10:49:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:00.540 10:49:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.540 10:49:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:00.540 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:19:00.540 10:49:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.540 10:49:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:00.540 10:49:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.540 10:49:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:00.540 10:49:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.540 10:49:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:00.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:00.540 10:49:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.540 10:49:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:00.540 10:49:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:00.540 10:49:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:00.540 10:49:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:00.540 10:49:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:00.540 10:49:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.540 10:49:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.540 10:49:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.540 10:49:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:00.540 10:49:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.540 10:49:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.540 10:49:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:00.540 10:49:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.540 10:49:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.540 10:49:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:00.540 10:49:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:00.540 10:49:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.797 10:49:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.797 10:49:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.798 10:49:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.798 10:49:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:00.798 10:49:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.798 10:49:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.798 10:49:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.798 10:49:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:00.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:19:00.798 00:19:00.798 --- 10.0.0.2 ping statistics --- 00:19:00.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.798 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:19:00.798 10:49:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:00.798 00:19:00.798 --- 10.0.0.1 ping statistics --- 00:19:00.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.798 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:00.798 10:49:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.798 10:49:17 -- nvmf/common.sh@410 -- # return 0 00:19:00.798 10:49:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:00.798 10:49:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.798 10:49:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:00.798 10:49:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:00.798 10:49:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.798 10:49:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:00.798 10:49:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:00.798 10:49:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:00.798 10:49:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.798 10:49:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:00.798 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.798 10:49:17 -- nvmf/common.sh@469 -- # nvmfpid=3468568 00:19:00.798 10:49:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:00.798 10:49:17 -- nvmf/common.sh@470 -- # waitforlisten 3468568 00:19:00.798 10:49:17 -- common/autotest_common.sh@819 -- # '[' -z 3468568 ']' 00:19:00.798 10:49:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.798 10:49:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:00.798 10:49:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.798 10:49:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:00.798 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.798 [2024-07-10 10:49:17.556989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:00.798 [2024-07-10 10:49:17.557079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.798 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.055 [2024-07-10 10:49:17.622449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.055 [2024-07-10 10:49:17.711901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.055 [2024-07-10 10:49:17.712052] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.055 [2024-07-10 10:49:17.712083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.055 [2024-07-10 10:49:17.712096] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.055 [2024-07-10 10:49:17.712157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.055 [2024-07-10 10:49:17.712216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.055 [2024-07-10 10:49:17.712266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.055 [2024-07-10 10:49:17.712268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.986 10:49:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:01.986 10:49:18 -- common/autotest_common.sh@852 -- # return 0 00:19:01.986 10:49:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:01.986 10:49:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 10:49:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.986 10:49:18 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 [2024-07-10 10:49:18.536027] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 Malloc0 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 [2024-07-10 10:49:18.587047] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:01.986 test case1: single bdev can't be used in multiple subsystems 00:19:01.986 10:49:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@28 -- # nmic_status=0 00:19:01.986 10:49:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 [2024-07-10 10:49:18.610954] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:01.986 [2024-07-10 10:49:18.610982] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:01.986 [2024-07-10 10:49:18.611012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.986 request: 00:19:01.986 { 00:19:01.986 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:01.986 "namespace": { 00:19:01.986 "bdev_name": "Malloc0" 00:19:01.986 }, 00:19:01.986 "method": "nvmf_subsystem_add_ns", 00:19:01.986 "req_id": 1 00:19:01.986 } 00:19:01.986 Got JSON-RPC error response 00:19:01.986 response: 00:19:01.986 { 00:19:01.986 "code": -32602, 00:19:01.986 "message": "Invalid parameters" 00:19:01.986 } 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@29 -- # nmic_status=1 00:19:01.986 10:49:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:01.986 10:49:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:01.986 Adding namespace failed - expected result. 00:19:01.986 10:49:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:01.986 test case2: host connect to nvmf target in multiple paths 00:19:01.986 10:49:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:01.986 10:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.986 10:49:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.986 [2024-07-10 10:49:18.619085] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:01.986 10:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.986 10:49:18 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.551 10:49:19 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:03.116 10:49:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.116 10:49:19 -- common/autotest_common.sh@1177 -- # local i=0 00:19:03.116 10:49:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.116 10:49:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:03.116 10:49:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:05.643 10:49:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:05.643 10:49:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:05.643 10:49:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.643 10:49:21 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:19:05.643 10:49:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.643 10:49:21 -- common/autotest_common.sh@1187 -- # return 0 00:19:05.643 10:49:21 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:05.643 [global] 00:19:05.643 thread=1 00:19:05.643 invalidate=1 00:19:05.643 rw=write 00:19:05.643 time_based=1 00:19:05.643 runtime=1 00:19:05.643 ioengine=libaio 00:19:05.643 direct=1 00:19:05.643 bs=4096 00:19:05.643 iodepth=1 00:19:05.643 norandommap=0 00:19:05.643 numjobs=1 00:19:05.643 00:19:05.643 verify_dump=1 00:19:05.643 verify_backlog=512 00:19:05.643 verify_state_save=0 00:19:05.643 do_verify=1 00:19:05.643 verify=crc32c-intel 00:19:05.643 [job0] 00:19:05.643 filename=/dev/nvme0n1 00:19:05.643 Could not set queue depth (nvme0n1) 00:19:05.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:05.643 fio-3.35 00:19:05.643 Starting 1 thread 00:19:06.576 00:19:06.576 job0: (groupid=0, jobs=1): err= 0: pid=3469223: Wed Jul 10 10:49:23 2024 00:19:06.576 read: IOPS=20, BW=80.8KiB/s (82.7kB/s)(84.0KiB/1040msec) 00:19:06.576 slat (nsec): min=13865, max=44756, avg=26371.00, stdev=10789.24 00:19:06.576 clat (usec): min=40949, max=42037, avg=41861.08, stdev=305.31 00:19:06.576 lat (usec): min=40986, max=42051, avg=41887.45, stdev=301.72 00:19:06.576 clat percentiles (usec): 00:19:06.576 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:19:06.576 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:06.576 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:06.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.576 | 99.99th=[42206] 00:19:06.576 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:19:06.576 slat (usec): min=7, max=29557, avg=72.14, stdev=1305.66 00:19:06.576 clat (usec): min=185, max=640, avg=235.65, stdev=33.19 00:19:06.576 lat (usec): min=192, max=29844, avg=307.79, stdev=1308.41 00:19:06.576 clat percentiles (usec): 00:19:06.576 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:19:06.576 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 239], 60.00th=[ 245], 00:19:06.576 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:19:06.576 | 99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 644], 99.95th=[ 644], 00:19:06.576 | 99.99th=[ 644] 00:19:06.576 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:06.576 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:06.576 lat (usec) : 250=65.10%, 500=30.77%, 750=0.19% 00:19:06.576 lat (msec) : 50=3.94% 00:19:06.576 cpu : usr=0.48%, sys=0.96%, ctx=536, majf=0, minf=2 00:19:06.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.576 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.576 00:19:06.576 Run status group 0 (all jobs): 00:19:06.576 READ: bw=80.8KiB/s (82.7kB/s), 80.8KiB/s-80.8KiB/s (82.7kB/s-82.7kB/s), io=84.0KiB (86.0kB), run=1040-1040msec 00:19:06.576 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB 
(2097kB), run=1040-1040msec 00:19:06.576 00:19:06.576 Disk stats (read/write): 00:19:06.576 nvme0n1: ios=43/512, merge=0/0, ticks=1697/119, in_queue=1816, util=98.70% 00:19:06.576 10:49:23 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:06.834 10:49:23 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.834 10:49:23 -- common/autotest_common.sh@1198 -- # local i=0 00:19:06.834 10:49:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:06.834 10:49:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.834 10:49:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:06.834 10:49:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.834 10:49:23 -- common/autotest_common.sh@1210 -- # return 0 00:19:06.834 10:49:23 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:06.834 10:49:23 -- target/nmic.sh@53 -- # nvmftestfini 00:19:06.834 10:49:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:06.834 10:49:23 -- nvmf/common.sh@116 -- # sync 00:19:06.834 10:49:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:06.834 10:49:23 -- nvmf/common.sh@119 -- # set +e 00:19:06.834 10:49:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:06.834 10:49:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:06.834 rmmod nvme_tcp 00:19:06.834 rmmod nvme_fabrics 00:19:06.834 rmmod nvme_keyring 00:19:06.834 10:49:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:06.834 10:49:23 -- nvmf/common.sh@123 -- # set -e 00:19:06.834 10:49:23 -- nvmf/common.sh@124 -- # return 0 00:19:06.834 10:49:23 -- nvmf/common.sh@477 -- # '[' -n 3468568 ']' 00:19:06.834 10:49:23 -- nvmf/common.sh@478 -- # killprocess 3468568 00:19:06.834 10:49:23 -- common/autotest_common.sh@926 -- # '[' -z 3468568 ']' 00:19:06.834 10:49:23 -- common/autotest_common.sh@930 -- # kill -0 3468568 00:19:06.834 10:49:23 -- common/autotest_common.sh@931 -- # uname 00:19:06.834 10:49:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:06.834 10:49:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3468568 00:19:06.834 10:49:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:06.834 10:49:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:06.834 10:49:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3468568' 00:19:06.834 killing process with pid 3468568 00:19:06.834 10:49:23 -- common/autotest_common.sh@945 -- # kill 3468568 00:19:06.834 10:49:23 -- common/autotest_common.sh@950 -- # wait 3468568 00:19:07.093 10:49:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.093 10:49:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.093 10:49:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.093 10:49:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.093 10:49:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.093 10:49:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.093 10:49:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.093 10:49:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.628 10:49:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.628 00:19:09.628 real 0m10.520s 00:19:09.628 user 0m25.169s 00:19:09.628 sys 0m2.383s 00:19:09.628 10:49:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.628 10:49:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.628 ************************************ 00:19:09.628 END TEST nvmf_nmic 00:19:09.628 ************************************ 00:19:09.628 10:49:25 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.628 10:49:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.628 10:49:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.628 10:49:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.628 ************************************ 00:19:09.628 START TEST nvmf_fio_target 00:19:09.628 ************************************ 00:19:09.628 10:49:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.628 * Looking for test storage... 00:19:09.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.628 10:49:25 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.628 10:49:25 -- nvmf/common.sh@7 -- # uname -s 00:19:09.629 10:49:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.629 10:49:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.629 10:49:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.629 10:49:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.629 10:49:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.629 10:49:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.629 10:49:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.629 10:49:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.629 10:49:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.629 10:49:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.629 10:49:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.629 10:49:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.629 10:49:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.629 10:49:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.629 10:49:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.629 10:49:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.629 10:49:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.629 10:49:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.629 10:49:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.629 10:49:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.629 10:49:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.629 10:49:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.629 10:49:25 -- paths/export.sh@5 -- # export PATH 00:19:09.629 10:49:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.629 10:49:25 -- nvmf/common.sh@46 -- # : 0 00:19:09.629 10:49:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.629 10:49:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.629 10:49:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.629 10:49:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.629 10:49:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.629 10:49:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:09.629 10:49:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.629 10:49:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.629 10:49:25 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.629 10:49:25 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.629 10:49:25 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.629 10:49:25 -- target/fio.sh@16 -- # nvmftestinit 00:19:09.629 10:49:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.629 10:49:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.629 10:49:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.629 10:49:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.629 10:49:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.629 10:49:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.629 10:49:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.629 10:49:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.629 10:49:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:09.629 10:49:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:09.629 10:49:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:09.629 10:49:25 -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.530 10:49:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:11.530 10:49:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:11.530 10:49:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:11.530 10:49:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:11.530 10:49:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:11.530 10:49:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:11.530 10:49:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:11.530 10:49:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:11.530 10:49:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:11.530 10:49:27 -- nvmf/common.sh@295 -- # e810=() 00:19:11.530 10:49:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:11.530 10:49:27 -- nvmf/common.sh@296 -- # x722=() 00:19:11.530 10:49:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:11.530 10:49:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:11.530 10:49:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:11.530 10:49:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.530 10:49:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:11.530 10:49:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:11.530 10:49:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:11.530 10:49:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.530 10:49:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:11.530 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:11.530 10:49:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.530 10:49:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:11.530 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:11.530 10:49:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:19:11.530 10:49:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:11.530 10:49:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.530 10:49:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.530 10:49:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.530 10:49:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.530 10:49:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:11.530 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:11.530 10:49:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.530 10:49:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.530 10:49:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.530 10:49:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.530 10:49:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.530 10:49:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:11.530 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:11.530 10:49:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.530 10:49:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:11.530 10:49:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:11.530 10:49:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:11.530 10:49:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:11.530 10:49:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.530 10:49:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.530 10:49:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.530 10:49:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:11.530 10:49:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.530 10:49:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.531 10:49:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:11.531 10:49:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.531 10:49:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.531 10:49:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:11.531 10:49:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:11.531 10:49:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.531 10:49:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.531 10:49:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.531 10:49:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.531 10:49:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:11.531 10:49:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.531 10:49:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.531 10:49:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.531 10:49:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:11.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:11.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:11.531 00:19:11.531 --- 10.0.0.2 ping statistics --- 00:19:11.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.531 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:11.531 10:49:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:19:11.531 00:19:11.531 --- 10.0.0.1 ping statistics --- 00:19:11.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.531 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:11.531 10:49:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.531 10:49:28 -- nvmf/common.sh@410 -- # return 0 00:19:11.531 10:49:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:11.531 10:49:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.531 10:49:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:11.531 10:49:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:11.531 10:49:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.531 10:49:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:11.531 10:49:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:11.531 10:49:28 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:11.531 10:49:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:11.531 10:49:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:11.531 10:49:28 -- common/autotest_common.sh@10 -- # set +x 00:19:11.531 10:49:28 -- nvmf/common.sh@469 -- # nvmfpid=3471429 00:19:11.531 10:49:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.531 10:49:28 -- nvmf/common.sh@470 -- # waitforlisten 3471429 00:19:11.531 10:49:28 -- common/autotest_common.sh@819 -- # '[' -z 3471429 ']' 00:19:11.531 10:49:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.531 10:49:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.531 10:49:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.531 10:49:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.531 10:49:28 -- common/autotest_common.sh@10 -- # set +x 00:19:11.531 [2024-07-10 10:49:28.151087] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:11.531 [2024-07-10 10:49:28.151157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.531 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.531 [2024-07-10 10:49:28.219253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.531 [2024-07-10 10:49:28.314245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:11.531 [2024-07-10 10:49:28.314418] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.531 [2024-07-10 10:49:28.314447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
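The nvmf_tcp_init sequence logged above splits the two cvl ports between a target network namespace and the root (initiator) namespace, then verifies reachability in both directions before nvmf_tgt is started. A minimal standalone sketch of those steps, consolidated from the commands shown in this run and assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing (the harness drives this through nvmf/common.sh, not by hand):

  # target port moves into its own netns; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address (target ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> root ns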
00:19:11.531 [2024-07-10 10:49:28.314461] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.531 [2024-07-10 10:49:28.314527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.531 [2024-07-10 10:49:28.314581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.531 [2024-07-10 10:49:28.314631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.531 [2024-07-10 10:49:28.314634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.463 10:49:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.463 10:49:29 -- common/autotest_common.sh@852 -- # return 0 00:19:12.463 10:49:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:12.463 10:49:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:12.463 10:49:29 -- common/autotest_common.sh@10 -- # set +x 00:19:12.463 10:49:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.463 10:49:29 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:12.721 [2024-07-10 10:49:29.444186] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.721 10:49:29 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:12.979 10:49:29 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:12.979 10:49:29 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.237 10:49:29 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:13.237 10:49:29 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.494 10:49:30 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:13.494 10:49:30 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.753 10:49:30 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:13.753 10:49:30 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:14.010 10:49:30 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.268 10:49:30 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:14.268 10:49:30 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.526 10:49:31 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:14.526 10:49:31 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.783 10:49:31 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:14.784 10:49:31 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:15.041 10:49:31 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:15.298 10:49:31 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.298 10:49:31 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:15.555 10:49:32 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.555 10:49:32 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:15.813 10:49:32 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.070 [2024-07-10 10:49:32.650830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.070 10:49:32 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:16.329 10:49:32 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:16.329 10:49:33 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:17.260 10:49:33 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:17.260 10:49:33 -- common/autotest_common.sh@1177 -- # local i=0 00:19:17.260 10:49:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:17.260 10:49:33 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:17.260 10:49:33 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:17.260 10:49:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:19.156 10:49:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:19.156 10:49:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:19.156 10:49:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.156 10:49:35 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:19.156 10:49:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.156 10:49:35 -- common/autotest_common.sh@1187 -- # return 0 00:19:19.156 10:49:35 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:19.156 [global] 00:19:19.156 thread=1 00:19:19.156 invalidate=1 00:19:19.156 rw=write 00:19:19.156 time_based=1 00:19:19.156 runtime=1 00:19:19.156 ioengine=libaio 00:19:19.156 direct=1 00:19:19.156 bs=4096 00:19:19.156 iodepth=1 00:19:19.156 norandommap=0 00:19:19.156 numjobs=1 00:19:19.156 00:19:19.156 verify_dump=1 00:19:19.156 verify_backlog=512 00:19:19.156 verify_state_save=0 00:19:19.156 do_verify=1 00:19:19.156 verify=crc32c-intel 00:19:19.156 [job0] 00:19:19.156 filename=/dev/nvme0n1 00:19:19.156 [job1] 00:19:19.156 filename=/dev/nvme0n2 00:19:19.156 [job2] 00:19:19.156 filename=/dev/nvme0n3 00:19:19.156 [job3] 00:19:19.156 filename=/dev/nvme0n4 00:19:19.156 Could not set queue depth (nvme0n1) 00:19:19.156 Could not set queue depth (nvme0n2) 00:19:19.156 Could not set queue depth (nvme0n3) 00:19:19.156 Could not set queue depth (nvme0n4) 00:19:19.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:19:19.414 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.414 fio-3.35 00:19:19.414 Starting 4 threads 00:19:20.786 00:19:20.786 job0: (groupid=0, jobs=1): err= 0: pid=3472542: Wed Jul 10 10:49:37 2024 00:19:20.786 read: IOPS=502, BW=2012KiB/s (2060kB/s)(2068KiB/1028msec) 00:19:20.786 slat (nsec): min=9245, max=61133, avg=27958.87, stdev=7755.81 00:19:20.786 clat (usec): min=398, max=41290, avg=1027.19, stdev=3959.21 00:19:20.786 lat (usec): min=421, max=41300, avg=1055.15, stdev=3957.94 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 424], 5.00th=[ 453], 10.00th=[ 486], 20.00th=[ 545], 00:19:20.786 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:19:20.786 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 816], 95.00th=[ 873], 00:19:20.786 | 99.00th=[ 988], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:20.786 | 99.99th=[41157] 00:19:20.786 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:19:20.786 slat (usec): min=9, max=20624, avg=50.01, stdev=643.67 00:19:20.786 clat (usec): min=212, max=2162, avg=408.41, stdev=115.12 00:19:20.786 lat (usec): min=224, max=20899, avg=458.41, stdev=649.43 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 233], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 330], 00:19:20.786 | 30.00th=[ 359], 40.00th=[ 383], 50.00th=[ 400], 60.00th=[ 420], 00:19:20.786 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 519], 95.00th=[ 553], 00:19:20.786 | 99.00th=[ 644], 99.50th=[ 685], 99.90th=[ 1614], 99.95th=[ 2180], 00:19:20.786 | 99.99th=[ 2180] 00:19:20.786 bw ( KiB/s): min= 4096, max= 4096, per=25.67%, avg=4096.00, stdev= 0.00, samples=2 00:19:20.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:20.786 lat (usec) : 250=1.56%, 500=60.67%, 750=31.86%, 1000=5.32% 00:19:20.786 lat (msec) : 2=0.19%, 4=0.06%, 50=0.32% 00:19:20.786 cpu : usr=3.70%, sys=5.16%, ctx=1544, majf=0, minf=1 00:19:20.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 issued rwts: total=517,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.786 job1: (groupid=0, jobs=1): err= 0: pid=3472543: Wed Jul 10 10:49:37 2024 00:19:20.786 read: IOPS=1428, BW=5714KiB/s (5851kB/s)(5720KiB/1001msec) 00:19:20.786 slat (nsec): min=5891, max=70075, avg=22581.77, stdev=12240.71 00:19:20.786 clat (usec): min=267, max=6141, avg=402.38, stdev=160.37 00:19:20.786 lat (usec): min=276, max=6150, avg=424.97, stdev=162.31 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 281], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 359], 00:19:20.786 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 416], 00:19:20.786 | 70.00th=[ 424], 80.00th=[ 433], 90.00th=[ 449], 95.00th=[ 469], 00:19:20.786 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 1020], 99.95th=[ 6128], 00:19:20.786 | 99.99th=[ 6128] 00:19:20.786 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:20.786 slat (usec): min=6, max=16858, avg=24.84, stdev=429.83 00:19:20.786 clat (usec): min=168, max=447, avg=217.77, stdev=29.34 00:19:20.786 lat (usec): min=176, max=17305, avg=242.62, stdev=436.62 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 
20.00th=[ 192], 00:19:20.786 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 221], 00:19:20.786 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 269], 00:19:20.786 | 99.00th=[ 306], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 449], 00:19:20.786 | 99.99th=[ 449] 00:19:20.786 bw ( KiB/s): min= 8032, max= 8032, per=50.33%, avg=8032.00, stdev= 0.00, samples=1 00:19:20.786 iops : min= 2008, max= 2008, avg=2008.00, stdev= 0.00, samples=1 00:19:20.786 lat (usec) : 250=45.35%, 500=53.37%, 750=1.21% 00:19:20.786 lat (msec) : 2=0.03%, 10=0.03% 00:19:20.786 cpu : usr=2.50%, sys=6.30%, ctx=2968, majf=0, minf=1 00:19:20.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 issued rwts: total=1430,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.786 job2: (groupid=0, jobs=1): err= 0: pid=3472544: Wed Jul 10 10:49:37 2024 00:19:20.786 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:20.786 slat (nsec): min=5822, max=62088, avg=22138.91, stdev=11174.70 00:19:20.786 clat (usec): min=314, max=2269, avg=529.84, stdev=164.71 00:19:20.786 lat (usec): min=325, max=2287, avg=551.97, stdev=170.11 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 326], 5.00th=[ 347], 10.00th=[ 363], 20.00th=[ 383], 00:19:20.786 | 30.00th=[ 416], 40.00th=[ 461], 50.00th=[ 519], 60.00th=[ 562], 00:19:20.786 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 775], 00:19:20.786 | 99.00th=[ 947], 99.50th=[ 1037], 99.90th=[ 1876], 99.95th=[ 2278], 00:19:20.786 | 99.99th=[ 2278] 00:19:20.786 write: IOPS=1027, BW=4112KiB/s (4211kB/s)(4116KiB/1001msec); 0 zone resets 00:19:20.786 slat (nsec): min=6590, max=77749, avg=29317.51, stdev=12653.03 00:19:20.786 clat (usec): min=201, max=799, avg=379.14, stdev=76.79 00:19:20.786 lat (usec): min=211, max=815, avg=408.45, stdev=80.22 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 215], 5.00th=[ 233], 10.00th=[ 253], 20.00th=[ 306], 00:19:20.786 | 30.00th=[ 351], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 408], 00:19:20.786 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 486], 00:19:20.786 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 799], 00:19:20.786 | 99.99th=[ 799] 00:19:20.786 bw ( KiB/s): min= 4096, max= 4096, per=25.67%, avg=4096.00, stdev= 0.00, samples=1 00:19:20.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:20.786 lat (usec) : 250=4.58%, 500=67.46%, 750=24.01%, 1000=3.56% 00:19:20.786 lat (msec) : 2=0.34%, 4=0.05% 00:19:20.786 cpu : usr=3.00%, sys=5.30%, ctx=2053, majf=0, minf=2 00:19:20.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 issued rwts: total=1024,1029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.786 job3: (groupid=0, jobs=1): err= 0: pid=3472545: Wed Jul 10 10:49:37 2024 00:19:20.786 read: IOPS=443, BW=1774KiB/s (1817kB/s)(1776KiB/1001msec) 00:19:20.786 slat (nsec): min=7158, max=33843, avg=9452.79, stdev=4352.19 00:19:20.786 clat (usec): min=275, max=41196, avg=1958.50, stdev=8030.35 00:19:20.786 lat 
(usec): min=283, max=41211, avg=1967.96, stdev=8034.11 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:19:20.786 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:19:20.786 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 351], 00:19:20.786 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:20.786 | 99.99th=[41157] 00:19:20.786 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:20.786 slat (nsec): min=6421, max=33954, avg=12098.38, stdev=6791.86 00:19:20.786 clat (usec): min=191, max=324, avg=222.28, stdev=15.98 00:19:20.786 lat (usec): min=198, max=348, avg=234.38, stdev=18.18 00:19:20.786 clat percentiles (usec): 00:19:20.786 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:19:20.786 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 227], 00:19:20.786 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 247], 00:19:20.786 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 326], 00:19:20.786 | 99.99th=[ 326] 00:19:20.786 bw ( KiB/s): min= 4096, max= 4096, per=25.67%, avg=4096.00, stdev= 0.00, samples=1 00:19:20.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:20.786 lat (usec) : 250=51.57%, 500=46.55% 00:19:20.786 lat (msec) : 50=1.88% 00:19:20.786 cpu : usr=0.50%, sys=1.30%, ctx=958, majf=0, minf=1 00:19:20.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.786 issued rwts: total=444,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.786 00:19:20.786 Run status group 0 (all jobs): 00:19:20.787 READ: bw=13.0MiB/s (13.6MB/s), 1774KiB/s-5714KiB/s (1817kB/s-5851kB/s), io=13.3MiB (14.0MB), run=1001-1028msec 00:19:20.787 WRITE: bw=15.6MiB/s (16.3MB/s), 2046KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1028msec 00:19:20.787 00:19:20.787 Disk stats (read/write): 00:19:20.787 nvme0n1: ios=537/1024, merge=0/0, ticks=1300/392, in_queue=1692, util=98.00% 00:19:20.787 nvme0n2: ios=1059/1536, merge=0/0, ticks=1362/324, in_queue=1686, util=97.86% 00:19:20.787 nvme0n3: ios=713/1024, merge=0/0, ticks=565/367, in_queue=932, util=99.69% 00:19:20.787 nvme0n4: ios=74/512, merge=0/0, ticks=1675/107, in_queue=1782, util=97.68% 00:19:20.787 10:49:37 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:20.787 [global] 00:19:20.787 thread=1 00:19:20.787 invalidate=1 00:19:20.787 rw=randwrite 00:19:20.787 time_based=1 00:19:20.787 runtime=1 00:19:20.787 ioengine=libaio 00:19:20.787 direct=1 00:19:20.787 bs=4096 00:19:20.787 iodepth=1 00:19:20.787 norandommap=0 00:19:20.787 numjobs=1 00:19:20.787 00:19:20.787 verify_dump=1 00:19:20.787 verify_backlog=512 00:19:20.787 verify_state_save=0 00:19:20.787 do_verify=1 00:19:20.787 verify=crc32c-intel 00:19:20.787 [job0] 00:19:20.787 filename=/dev/nvme0n1 00:19:20.787 [job1] 00:19:20.787 filename=/dev/nvme0n2 00:19:20.787 [job2] 00:19:20.787 filename=/dev/nvme0n3 00:19:20.787 [job3] 00:19:20.787 filename=/dev/nvme0n4 00:19:20.787 Could not set queue depth (nvme0n1) 00:19:20.787 Could not set queue depth (nvme0n2) 00:19:20.787 Could not set queue depth (nvme0n3) 00:19:20.787 
Could not set queue depth (nvme0n4) 00:19:20.787 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.787 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.787 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.787 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.787 fio-3.35 00:19:20.787 Starting 4 threads 00:19:22.158 00:19:22.158 job0: (groupid=0, jobs=1): err= 0: pid=3472777: Wed Jul 10 10:49:38 2024 00:19:22.158 read: IOPS=367, BW=1471KiB/s (1506kB/s)(1472KiB/1001msec) 00:19:22.158 slat (nsec): min=5207, max=39035, avg=10944.86, stdev=6768.13 00:19:22.158 clat (usec): min=258, max=41923, avg=2309.00, stdev=8805.23 00:19:22.158 lat (usec): min=264, max=41957, avg=2319.95, stdev=8808.71 00:19:22.158 clat percentiles (usec): 00:19:22.158 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 277], 00:19:22.158 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:19:22.158 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 486], 95.00th=[ 1696], 00:19:22.158 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:22.158 | 99.99th=[41681] 00:19:22.158 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:22.158 slat (nsec): min=10814, max=52841, avg=19958.05, stdev=5903.69 00:19:22.158 clat (usec): min=202, max=486, avg=257.95, stdev=28.43 00:19:22.158 lat (usec): min=215, max=502, avg=277.91, stdev=30.03 00:19:22.158 clat percentiles (usec): 00:19:22.158 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:19:22.158 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:19:22.158 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 306], 00:19:22.158 | 99.00th=[ 359], 99.50th=[ 420], 99.90th=[ 486], 99.95th=[ 486], 00:19:22.158 | 99.99th=[ 486] 00:19:22.158 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.158 lat (usec) : 250=26.14%, 500=70.57%, 750=1.14% 00:19:22.158 lat (msec) : 2=0.11%, 50=2.05% 00:19:22.158 cpu : usr=1.30%, sys=1.60%, ctx=881, majf=0, minf=1 00:19:22.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.158 issued rwts: total=368,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.158 job1: (groupid=0, jobs=1): err= 0: pid=3472778: Wed Jul 10 10:49:38 2024 00:19:22.158 read: IOPS=490, BW=1963KiB/s (2010kB/s)(2012KiB/1025msec) 00:19:22.158 slat (nsec): min=7938, max=52757, avg=15204.66, stdev=5475.29 00:19:22.158 clat (usec): min=289, max=41073, avg=1739.52, stdev=7344.96 00:19:22.158 lat (usec): min=297, max=41092, avg=1754.73, stdev=7346.81 00:19:22.158 clat percentiles (usec): 00:19:22.158 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 355], 00:19:22.158 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 371], 00:19:22.158 | 70.00th=[ 375], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 424], 00:19:22.158 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:22.158 | 99.99th=[41157] 
00:19:22.158 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:22.158 slat (nsec): min=7529, max=53258, avg=18446.03, stdev=6539.62 00:19:22.158 clat (usec): min=181, max=428, avg=248.39, stdev=36.57 00:19:22.158 lat (usec): min=194, max=446, avg=266.83, stdev=38.71 00:19:22.158 clat percentiles (usec): 00:19:22.158 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 221], 00:19:22.158 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:19:22.158 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 322], 00:19:22.158 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 429], 99.95th=[ 429], 00:19:22.158 | 99.99th=[ 429] 00:19:22.158 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.158 lat (usec) : 250=29.85%, 500=67.98%, 750=0.49% 00:19:22.158 lat (msec) : 50=1.67% 00:19:22.158 cpu : usr=1.27%, sys=2.25%, ctx=1015, majf=0, minf=2 00:19:22.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.158 issued rwts: total=503,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.158 job2: (groupid=0, jobs=1): err= 0: pid=3472779: Wed Jul 10 10:49:38 2024 00:19:22.158 read: IOPS=690, BW=2760KiB/s (2827kB/s)(2788KiB/1010msec) 00:19:22.158 slat (nsec): min=5520, max=58416, avg=16839.93, stdev=10047.17 00:19:22.158 clat (usec): min=246, max=42042, avg=1054.41, stdev=5428.78 00:19:22.158 lat (usec): min=252, max=42077, avg=1071.25, stdev=5429.99 00:19:22.158 clat percentiles (usec): 00:19:22.158 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:19:22.158 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 314], 00:19:22.158 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 388], 00:19:22.158 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:22.158 | 99.99th=[42206] 00:19:22.159 write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:19:22.159 slat (nsec): min=8581, max=62179, avg=19806.89, stdev=7403.41 00:19:22.159 clat (usec): min=169, max=424, avg=228.47, stdev=43.39 00:19:22.159 lat (usec): min=178, max=466, avg=248.27, stdev=46.70 00:19:22.159 clat percentiles (usec): 00:19:22.159 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:19:22.159 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 229], 60.00th=[ 241], 00:19:22.159 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 302], 00:19:22.159 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 408], 99.95th=[ 424], 00:19:22.159 | 99.99th=[ 424] 00:19:22.159 bw ( KiB/s): min= 4096, max= 4096, per=34.17%, avg=4096.00, stdev= 0.00, samples=2 00:19:22.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:22.159 lat (usec) : 250=42.48%, 500=56.42%, 750=0.35% 00:19:22.159 lat (msec) : 50=0.76% 00:19:22.159 cpu : usr=1.59%, sys=3.67%, ctx=1724, majf=0, minf=1 00:19:22.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.159 issued rwts: total=697,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.159 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:19:22.159 job3: (groupid=0, jobs=1): err= 0: pid=3472780: Wed Jul 10 10:49:38 2024 00:19:22.159 read: IOPS=738, BW=2954KiB/s (3025kB/s)(2960KiB/1002msec) 00:19:22.159 slat (nsec): min=5526, max=54914, avg=19272.55, stdev=10330.01 00:19:22.159 clat (usec): min=251, max=41348, avg=983.45, stdev=5143.39 00:19:22.159 lat (usec): min=262, max=41366, avg=1002.72, stdev=5144.11 00:19:22.159 clat percentiles (usec): 00:19:22.159 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:19:22.159 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 326], 00:19:22.159 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 416], 00:19:22.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:22.159 | 99.99th=[41157] 00:19:22.159 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:19:22.159 slat (nsec): min=7959, max=71903, avg=17439.63, stdev=7543.44 00:19:22.159 clat (usec): min=170, max=431, avg=227.49, stdev=42.28 00:19:22.159 lat (usec): min=179, max=463, avg=244.93, stdev=44.16 00:19:22.159 clat percentiles (usec): 00:19:22.159 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:19:22.159 | 30.00th=[ 192], 40.00th=[ 204], 50.00th=[ 231], 60.00th=[ 237], 00:19:22.159 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 306], 00:19:22.159 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 404], 99.95th=[ 433], 00:19:22.159 | 99.99th=[ 433] 00:19:22.159 bw ( KiB/s): min= 8192, max= 8192, per=68.33%, avg=8192.00, stdev= 0.00, samples=1 00:19:22.159 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:22.159 lat (usec) : 250=44.67%, 500=54.14%, 750=0.51% 00:19:22.159 lat (msec) : 50=0.68% 00:19:22.159 cpu : usr=2.10%, sys=2.90%, ctx=1765, majf=0, minf=1 00:19:22.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.159 issued rwts: total=740,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.159 00:19:22.159 Run status group 0 (all jobs): 00:19:22.159 READ: bw=9007KiB/s (9223kB/s), 1471KiB/s-2954KiB/s (1506kB/s-3025kB/s), io=9232KiB (9454kB), run=1001-1025msec 00:19:22.159 WRITE: bw=11.7MiB/s (12.3MB/s), 1998KiB/s-4088KiB/s (2046kB/s-4186kB/s), io=12.0MiB (12.6MB), run=1001-1025msec 00:19:22.159 00:19:22.159 Disk stats (read/write): 00:19:22.159 nvme0n1: ios=48/512, merge=0/0, ticks=1654/128, in_queue=1782, util=98.00% 00:19:22.159 nvme0n2: ios=513/512, merge=0/0, ticks=676/114, in_queue=790, util=86.69% 00:19:22.159 nvme0n3: ios=606/1024, merge=0/0, ticks=1560/215, in_queue=1775, util=98.12% 00:19:22.159 nvme0n4: ios=793/1024, merge=0/0, ticks=1406/224, in_queue=1630, util=98.11% 00:19:22.159 10:49:38 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:22.159 [global] 00:19:22.159 thread=1 00:19:22.159 invalidate=1 00:19:22.159 rw=write 00:19:22.159 time_based=1 00:19:22.159 runtime=1 00:19:22.159 ioengine=libaio 00:19:22.159 direct=1 00:19:22.159 bs=4096 00:19:22.159 iodepth=128 00:19:22.159 norandommap=0 00:19:22.159 numjobs=1 00:19:22.159 00:19:22.159 verify_dump=1 00:19:22.159 verify_backlog=512 00:19:22.159 verify_state_save=0 00:19:22.159 do_verify=1 00:19:22.159 verify=crc32c-intel 
00:19:22.159 [job0] 00:19:22.159 filename=/dev/nvme0n1 00:19:22.159 [job1] 00:19:22.159 filename=/dev/nvme0n2 00:19:22.159 [job2] 00:19:22.159 filename=/dev/nvme0n3 00:19:22.159 [job3] 00:19:22.159 filename=/dev/nvme0n4 00:19:22.159 Could not set queue depth (nvme0n1) 00:19:22.159 Could not set queue depth (nvme0n2) 00:19:22.159 Could not set queue depth (nvme0n3) 00:19:22.159 Could not set queue depth (nvme0n4) 00:19:22.159 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.159 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.159 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.159 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.159 fio-3.35 00:19:22.159 Starting 4 threads 00:19:23.533 00:19:23.533 job0: (groupid=0, jobs=1): err= 0: pid=3473016: Wed Jul 10 10:49:40 2024 00:19:23.533 read: IOPS=1340, BW=5364KiB/s (5493kB/s)(5380KiB/1003msec) 00:19:23.533 slat (usec): min=3, max=31411, avg=383.03, stdev=2539.12 00:19:23.533 clat (usec): min=1544, max=101246, avg=46023.83, stdev=25717.70 00:19:23.533 lat (msec): min=4, max=101, avg=46.41, stdev=25.88 00:19:23.533 clat percentiles (msec): 00:19:23.533 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 14], 00:19:23.533 | 30.00th=[ 29], 40.00th=[ 38], 50.00th=[ 53], 60.00th=[ 57], 00:19:23.533 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 78], 95.00th=[ 81], 00:19:23.533 | 99.00th=[ 86], 99.50th=[ 86], 99.90th=[ 86], 99.95th=[ 102], 00:19:23.533 | 99.99th=[ 102] 00:19:23.533 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:19:23.533 slat (usec): min=4, max=24917, avg=305.01, stdev=1911.64 00:19:23.533 clat (usec): min=9509, max=88904, avg=39604.23, stdev=23322.13 00:19:23.533 lat (usec): min=9757, max=88914, avg=39909.23, stdev=23432.13 00:19:23.533 clat percentiles (usec): 00:19:23.533 | 1.00th=[11863], 5.00th=[13304], 10.00th=[13566], 20.00th=[16319], 00:19:23.533 | 30.00th=[18482], 40.00th=[27395], 50.00th=[33817], 60.00th=[44303], 00:19:23.533 | 70.00th=[50070], 80.00th=[66847], 90.00th=[74974], 95.00th=[82314], 00:19:23.533 | 99.00th=[88605], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:19:23.533 | 99.99th=[88605] 00:19:23.533 bw ( KiB/s): min= 4096, max= 8192, per=9.72%, avg=6144.00, stdev=2896.31, samples=2 00:19:23.533 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:23.533 lat (msec) : 2=0.03%, 10=2.95%, 20=27.42%, 50=27.87%, 100=41.69% 00:19:23.533 lat (msec) : 250=0.03% 00:19:23.533 cpu : usr=1.90%, sys=3.79%, ctx=146, majf=0, minf=17 00:19:23.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.533 issued rwts: total=1345,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.533 job1: (groupid=0, jobs=1): err= 0: pid=3473017: Wed Jul 10 10:49:40 2024 00:19:23.533 read: IOPS=5274, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1003msec) 00:19:23.533 slat (usec): min=3, max=15073, avg=100.14, stdev=773.29 00:19:23.533 clat (usec): min=2336, max=47507, avg=13620.93, stdev=7797.88 00:19:23.533 lat (usec): min=2351, max=47546, avg=13721.07, stdev=7850.75 00:19:23.533 
clat percentiles (usec): 00:19:23.533 | 1.00th=[ 5735], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 8979], 00:19:23.533 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[11076], 60.00th=[11600], 00:19:23.533 | 70.00th=[12780], 80.00th=[15795], 90.00th=[26608], 95.00th=[29754], 00:19:23.533 | 99.00th=[41157], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:19:23.533 | 99.99th=[47449] 00:19:23.533 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:19:23.533 slat (usec): min=3, max=12215, avg=70.16, stdev=510.53 00:19:23.533 clat (usec): min=374, max=33904, avg=9741.75, stdev=4266.49 00:19:23.533 lat (usec): min=388, max=33922, avg=9811.91, stdev=4281.31 00:19:23.533 clat percentiles (usec): 00:19:23.533 | 1.00th=[ 676], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 6652], 00:19:23.533 | 30.00th=[ 7767], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10159], 00:19:23.533 | 70.00th=[10683], 80.00th=[12387], 90.00th=[13698], 95.00th=[16057], 00:19:23.533 | 99.00th=[31065], 99.50th=[31327], 99.90th=[33817], 99.95th=[33817], 00:19:23.533 | 99.99th=[33817] 00:19:23.533 bw ( KiB/s): min=20528, max=24528, per=35.63%, avg=22528.00, stdev=2828.43, samples=2 00:19:23.533 iops : min= 5132, max= 6132, avg=5632.00, stdev=707.11, samples=2 00:19:23.533 lat (usec) : 500=0.01%, 750=0.86%, 1000=0.27% 00:19:23.533 lat (msec) : 2=0.05%, 4=0.45%, 10=46.49%, 20=44.07%, 50=7.80% 00:19:23.533 cpu : usr=8.08%, sys=12.97%, ctx=330, majf=0, minf=11 00:19:23.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.533 issued rwts: total=5290,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.533 job2: (groupid=0, jobs=1): err= 0: pid=3473018: Wed Jul 10 10:49:40 2024 00:19:23.533 read: IOPS=3291, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1004msec) 00:19:23.533 slat (usec): min=2, max=15982, avg=120.47, stdev=911.76 00:19:23.533 clat (usec): min=826, max=54862, avg=16943.60, stdev=5884.21 00:19:23.533 lat (usec): min=839, max=58370, avg=17064.07, stdev=5940.61 00:19:23.533 clat percentiles (usec): 00:19:23.533 | 1.00th=[ 6456], 5.00th=[10290], 10.00th=[11076], 20.00th=[12649], 00:19:23.533 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15664], 60.00th=[16581], 00:19:23.533 | 70.00th=[17957], 80.00th=[20317], 90.00th=[26084], 95.00th=[28705], 00:19:23.533 | 99.00th=[31589], 99.50th=[31589], 99.90th=[54789], 99.95th=[54789], 00:19:23.533 | 99.99th=[54789] 00:19:23.533 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:19:23.533 slat (usec): min=3, max=35382, avg=123.94, stdev=1024.40 00:19:23.533 clat (usec): min=622, max=55544, avg=18670.15, stdev=10655.94 00:19:23.533 lat (usec): min=634, max=55550, avg=18794.09, stdev=10739.23 00:19:23.533 clat percentiles (usec): 00:19:23.533 | 1.00th=[ 2900], 5.00th=[ 6063], 10.00th=[ 7439], 20.00th=[ 9372], 00:19:23.533 | 30.00th=[11994], 40.00th=[14353], 50.00th=[17171], 60.00th=[19006], 00:19:23.533 | 70.00th=[20841], 80.00th=[23725], 90.00th=[35914], 95.00th=[41681], 00:19:23.533 | 99.00th=[50070], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:19:23.533 | 99.99th=[55313] 00:19:23.533 bw ( KiB/s): min=12456, max=16216, per=22.67%, avg=14336.00, stdev=2658.72, samples=2 00:19:23.533 iops : min= 3114, max= 4054, avg=3584.00, stdev=664.68, samples=2 00:19:23.533 lat (usec) : 
750=0.19%, 1000=0.15% 00:19:23.533 lat (msec) : 2=0.22%, 4=0.57%, 10=12.89%, 20=57.76%, 50=27.35% 00:19:23.533 lat (msec) : 100=0.89% 00:19:23.533 cpu : usr=2.19%, sys=5.18%, ctx=280, majf=0, minf=11 00:19:23.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.533 issued rwts: total=3305,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.533 job3: (groupid=0, jobs=1): err= 0: pid=3473019: Wed Jul 10 10:49:40 2024 00:19:23.533 read: IOPS=4787, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1004msec) 00:19:23.533 slat (usec): min=3, max=6494, avg=94.26, stdev=544.02 00:19:23.533 clat (usec): min=647, max=24431, avg=12545.35, stdev=2243.03 00:19:23.533 lat (usec): min=5705, max=25690, avg=12639.61, stdev=2267.86 00:19:23.533 clat percentiles (usec): 00:19:23.533 | 1.00th=[ 6587], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11076], 00:19:23.533 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:19:23.533 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14877], 95.00th=[16712], 00:19:23.533 | 99.00th=[20579], 99.50th=[21103], 99.90th=[24511], 99.95th=[24511], 00:19:23.533 | 99.99th=[24511] 00:19:23.533 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:19:23.533 slat (usec): min=3, max=14298, avg=96.83, stdev=656.45 00:19:23.533 clat (usec): min=971, max=36375, avg=13097.90, stdev=3401.75 00:19:23.533 lat (usec): min=978, max=36394, avg=13194.74, stdev=3430.72 00:19:23.533 clat percentiles (usec): 00:19:23.533 | 1.00th=[ 6456], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11469], 00:19:23.533 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:19:23.533 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15139], 95.00th=[20055], 00:19:23.533 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:19:23.533 | 99.99th=[36439] 00:19:23.533 bw ( KiB/s): min=20480, max=20480, per=32.39%, avg=20480.00, stdev= 0.00, samples=2 00:19:23.533 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:23.533 lat (usec) : 750=0.01%, 1000=0.02% 00:19:23.533 lat (msec) : 10=5.74%, 20=90.99%, 50=3.23% 00:19:23.533 cpu : usr=6.78%, sys=10.97%, ctx=294, majf=0, minf=11 00:19:23.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.533 issued rwts: total=4807,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.533 00:19:23.533 Run status group 0 (all jobs): 00:19:23.533 READ: bw=57.4MiB/s (60.2MB/s), 5364KiB/s-20.6MiB/s (5493kB/s-21.6MB/s), io=57.6MiB (60.4MB), run=1003-1004msec 00:19:23.533 WRITE: bw=61.8MiB/s (64.8MB/s), 6126KiB/s-21.9MiB/s (6273kB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1003-1004msec 00:19:23.533 00:19:23.533 Disk stats (read/write): 00:19:23.533 nvme0n1: ios=996/1024, merge=0/0, ticks=21469/14735, in_queue=36204, util=91.68% 00:19:23.533 nvme0n2: ios=4612/4608, merge=0/0, ticks=51178/34383, in_queue=85561, util=93.91% 00:19:23.533 nvme0n3: ios=2713/3072, merge=0/0, ticks=33330/46426, in_queue=79756, util=99.58% 00:19:23.533 nvme0n4: ios=4118/4299, merge=0/0, ticks=25438/27528, 
in_queue=52966, util=97.69% 00:19:23.533 10:49:40 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:23.533 [global] 00:19:23.533 thread=1 00:19:23.533 invalidate=1 00:19:23.533 rw=randwrite 00:19:23.533 time_based=1 00:19:23.533 runtime=1 00:19:23.533 ioengine=libaio 00:19:23.533 direct=1 00:19:23.533 bs=4096 00:19:23.533 iodepth=128 00:19:23.533 norandommap=0 00:19:23.533 numjobs=1 00:19:23.534 00:19:23.534 verify_dump=1 00:19:23.534 verify_backlog=512 00:19:23.534 verify_state_save=0 00:19:23.534 do_verify=1 00:19:23.534 verify=crc32c-intel 00:19:23.534 [job0] 00:19:23.534 filename=/dev/nvme0n1 00:19:23.534 [job1] 00:19:23.534 filename=/dev/nvme0n2 00:19:23.534 [job2] 00:19:23.534 filename=/dev/nvme0n3 00:19:23.534 [job3] 00:19:23.534 filename=/dev/nvme0n4 00:19:23.534 Could not set queue depth (nvme0n1) 00:19:23.534 Could not set queue depth (nvme0n2) 00:19:23.534 Could not set queue depth (nvme0n3) 00:19:23.534 Could not set queue depth (nvme0n4) 00:19:23.792 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.792 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.792 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.792 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.792 fio-3.35 00:19:23.792 Starting 4 threads 00:19:25.162 00:19:25.162 job0: (groupid=0, jobs=1): err= 0: pid=3473247: Wed Jul 10 10:49:41 2024 00:19:25.162 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:19:25.162 slat (usec): min=2, max=35810, avg=117.13, stdev=1060.90 00:19:25.162 clat (usec): min=3561, max=76605, avg=15963.19, stdev=10162.08 00:19:25.162 lat (usec): min=3577, max=89605, avg=16080.32, stdev=10250.00 00:19:25.162 clat percentiles (usec): 00:19:25.162 | 1.00th=[ 6456], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[10159], 00:19:25.162 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12911], 60.00th=[13566], 00:19:25.162 | 70.00th=[15008], 80.00th=[17433], 90.00th=[32375], 95.00th=[40109], 00:19:25.162 | 99.00th=[53216], 99.50th=[66323], 99.90th=[73925], 99.95th=[73925], 00:19:25.162 | 99.99th=[77071] 00:19:25.162 write: IOPS=4193, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1009msec); 0 zone resets 00:19:25.162 slat (usec): min=3, max=13357, avg=106.92, stdev=746.58 00:19:25.162 clat (usec): min=663, max=76335, avg=14826.91, stdev=12006.03 00:19:25.162 lat (usec): min=670, max=76354, avg=14933.83, stdev=12065.36 00:19:25.162 clat percentiles (usec): 00:19:25.162 | 1.00th=[ 3425], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8848], 00:19:25.162 | 30.00th=[ 9634], 40.00th=[11207], 50.00th=[11731], 60.00th=[12911], 00:19:25.162 | 70.00th=[13829], 80.00th=[15664], 90.00th=[22414], 95.00th=[46924], 00:19:25.162 | 99.00th=[74974], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:19:25.162 | 99.99th=[76022] 00:19:25.162 bw ( KiB/s): min=11272, max=21560, per=25.95%, avg=16416.00, stdev=7274.71, samples=2 00:19:25.162 iops : min= 2818, max= 5390, avg=4104.00, stdev=1818.68, samples=2 00:19:25.162 lat (usec) : 750=0.02% 00:19:25.162 lat (msec) : 2=0.10%, 4=0.95%, 10=25.22%, 20=59.97%, 50=10.88% 00:19:25.162 lat (msec) : 100=2.86% 00:19:25.162 cpu : usr=4.96%, sys=6.65%, ctx=312, majf=0, minf=11 00:19:25.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:19:25.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.162 issued rwts: total=4096,4231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.162 job1: (groupid=0, jobs=1): err= 0: pid=3473248: Wed Jul 10 10:49:41 2024 00:19:25.162 read: IOPS=3506, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1007msec) 00:19:25.162 slat (usec): min=2, max=21693, avg=150.74, stdev=1221.64 00:19:25.162 clat (usec): min=2759, max=77806, avg=19838.08, stdev=15715.82 00:19:25.162 lat (usec): min=5090, max=77811, avg=19988.82, stdev=15799.42 00:19:25.162 clat percentiles (usec): 00:19:25.162 | 1.00th=[ 5473], 5.00th=[ 7242], 10.00th=[ 9503], 20.00th=[10552], 00:19:25.162 | 30.00th=[11994], 40.00th=[13173], 50.00th=[13698], 60.00th=[16319], 00:19:25.162 | 70.00th=[18744], 80.00th=[21365], 90.00th=[45876], 95.00th=[63177], 00:19:25.162 | 99.00th=[72877], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:19:25.162 | 99.99th=[78119] 00:19:25.162 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:19:25.162 slat (usec): min=3, max=16598, avg=110.50, stdev=842.67 00:19:25.162 clat (usec): min=1517, max=61357, avg=15976.14, stdev=11489.44 00:19:25.162 lat (usec): min=1525, max=61374, avg=16086.64, stdev=11537.45 00:19:25.162 clat percentiles (usec): 00:19:25.162 | 1.00th=[ 3163], 5.00th=[ 5932], 10.00th=[ 6849], 20.00th=[ 8717], 00:19:25.162 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[11731], 60.00th=[13829], 00:19:25.162 | 70.00th=[14877], 80.00th=[21103], 90.00th=[34866], 95.00th=[43779], 00:19:25.162 | 99.00th=[54264], 99.50th=[58459], 99.90th=[61080], 99.95th=[61604], 00:19:25.162 | 99.99th=[61604] 00:19:25.162 bw ( KiB/s): min=12288, max=16351, per=22.63%, avg=14319.50, stdev=2872.97, samples=2 00:19:25.162 iops : min= 3072, max= 4087, avg=3579.50, stdev=717.71, samples=2 00:19:25.162 lat (msec) : 2=0.20%, 4=0.52%, 10=25.40%, 20=50.39%, 50=17.84% 00:19:25.162 lat (msec) : 100=5.66% 00:19:25.162 cpu : usr=3.68%, sys=6.26%, ctx=279, majf=0, minf=13 00:19:25.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:25.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.162 issued rwts: total=3531,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.162 job2: (groupid=0, jobs=1): err= 0: pid=3473267: Wed Jul 10 10:49:41 2024 00:19:25.162 read: IOPS=3602, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1012msec) 00:19:25.162 slat (usec): min=2, max=20284, avg=122.88, stdev=1025.37 00:19:25.162 clat (usec): min=2302, max=49923, avg=16830.25, stdev=8189.79 00:19:25.162 lat (usec): min=2331, max=51883, avg=16953.13, stdev=8258.87 00:19:25.162 clat percentiles (usec): 00:19:25.162 | 1.00th=[ 6456], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[11207], 00:19:25.162 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14222], 60.00th=[15270], 00:19:25.162 | 70.00th=[17433], 80.00th=[23200], 90.00th=[28967], 95.00th=[34341], 00:19:25.162 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[47973], 00:19:25.162 | 99.99th=[50070] 00:19:25.162 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:19:25.162 slat (usec): min=3, max=19964, avg=111.56, stdev=927.77 00:19:25.162 clat 
(usec): min=1235, max=52909, avg=16165.27, stdev=7901.90 00:19:25.162 lat (usec): min=1248, max=52946, avg=16276.84, stdev=7972.72 00:19:25.162 clat percentiles (usec): 00:19:25.162 | 1.00th=[ 3687], 5.00th=[ 6849], 10.00th=[ 7963], 20.00th=[10814], 00:19:25.162 | 30.00th=[11338], 40.00th=[11994], 50.00th=[13042], 60.00th=[15795], 00:19:25.162 | 70.00th=[18744], 80.00th=[22938], 90.00th=[26084], 95.00th=[33162], 00:19:25.162 | 99.00th=[38536], 99.50th=[40633], 99.90th=[40633], 99.95th=[43779], 00:19:25.162 | 99.99th=[52691] 00:19:25.163 bw ( KiB/s): min=13429, max=18784, per=25.46%, avg=16106.50, stdev=3786.56, samples=2 00:19:25.163 iops : min= 3357, max= 4696, avg=4026.50, stdev=946.82, samples=2 00:19:25.163 lat (msec) : 2=0.23%, 4=0.72%, 10=13.37%, 20=60.66%, 50=25.01% 00:19:25.163 lat (msec) : 100=0.01% 00:19:25.163 cpu : usr=4.65%, sys=7.81%, ctx=314, majf=0, minf=13 00:19:25.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:25.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.163 issued rwts: total=3646,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.163 job3: (groupid=0, jobs=1): err= 0: pid=3473272: Wed Jul 10 10:49:41 2024 00:19:25.163 read: IOPS=3810, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1007msec) 00:19:25.163 slat (usec): min=2, max=21489, avg=117.81, stdev=1024.68 00:19:25.163 clat (usec): min=4192, max=55940, avg=16481.55, stdev=7431.60 00:19:25.163 lat (usec): min=5640, max=55949, avg=16599.36, stdev=7484.06 00:19:25.163 clat percentiles (usec): 00:19:25.163 | 1.00th=[ 5866], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[11994], 00:19:25.163 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14484], 60.00th=[15795], 00:19:25.163 | 70.00th=[17171], 80.00th=[20579], 90.00th=[24249], 95.00th=[28443], 00:19:25.163 | 99.00th=[54264], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:19:25.163 | 99.99th=[55837] 00:19:25.163 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:19:25.163 slat (usec): min=3, max=15581, avg=112.21, stdev=866.32 00:19:25.163 clat (usec): min=1171, max=89237, avg=15639.94, stdev=12610.04 00:19:25.163 lat (usec): min=1565, max=89250, avg=15752.16, stdev=12676.73 00:19:25.163 clat percentiles (usec): 00:19:25.163 | 1.00th=[ 5014], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 8717], 00:19:25.163 | 30.00th=[10552], 40.00th=[13173], 50.00th=[13829], 60.00th=[14091], 00:19:25.163 | 70.00th=[14746], 80.00th=[15795], 90.00th=[22938], 95.00th=[34866], 00:19:25.163 | 99.00th=[81265], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:19:25.163 | 99.99th=[89654] 00:19:25.163 bw ( KiB/s): min=12263, max=20480, per=25.88%, avg=16371.50, stdev=5810.30, samples=2 00:19:25.163 iops : min= 3065, max= 5120, avg=4092.50, stdev=1453.10, samples=2 00:19:25.163 lat (msec) : 2=0.09%, 4=0.28%, 10=19.68%, 20=62.36%, 50=15.11% 00:19:25.163 lat (msec) : 100=2.48% 00:19:25.163 cpu : usr=3.88%, sys=6.06%, ctx=269, majf=0, minf=13 00:19:25.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:25.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.163 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.163 
00:19:25.163 Run status group 0 (all jobs): 00:19:25.163 READ: bw=58.3MiB/s (61.2MB/s), 13.7MiB/s-15.9MiB/s (14.4MB/s-16.6MB/s), io=59.0MiB (61.9MB), run=1007-1012msec 00:19:25.163 WRITE: bw=61.8MiB/s (64.8MB/s), 13.9MiB/s-16.4MiB/s (14.6MB/s-17.2MB/s), io=62.5MiB (65.6MB), run=1007-1012msec 00:19:25.163 00:19:25.163 Disk stats (read/write): 00:19:25.163 nvme0n1: ios=3867/4096, merge=0/0, ticks=39205/40154, in_queue=79359, util=85.87% 00:19:25.163 nvme0n2: ios=3121/3095, merge=0/0, ticks=37248/26592, in_queue=63840, util=92.89% 00:19:25.163 nvme0n3: ios=3124/3270, merge=0/0, ticks=38340/31233, in_queue=69573, util=97.07% 00:19:25.163 nvme0n4: ios=3118/3079, merge=0/0, ticks=41596/42665, in_queue=84261, util=96.94% 00:19:25.163 10:49:41 -- target/fio.sh@55 -- # sync 00:19:25.163 10:49:41 -- target/fio.sh@59 -- # fio_pid=3473462 00:19:25.163 10:49:41 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:25.163 10:49:41 -- target/fio.sh@61 -- # sleep 3 00:19:25.163 [global] 00:19:25.163 thread=1 00:19:25.163 invalidate=1 00:19:25.163 rw=read 00:19:25.163 time_based=1 00:19:25.163 runtime=10 00:19:25.163 ioengine=libaio 00:19:25.163 direct=1 00:19:25.163 bs=4096 00:19:25.163 iodepth=1 00:19:25.163 norandommap=1 00:19:25.163 numjobs=1 00:19:25.163 00:19:25.163 [job0] 00:19:25.163 filename=/dev/nvme0n1 00:19:25.163 [job1] 00:19:25.163 filename=/dev/nvme0n2 00:19:25.163 [job2] 00:19:25.163 filename=/dev/nvme0n3 00:19:25.163 [job3] 00:19:25.163 filename=/dev/nvme0n4 00:19:25.163 Could not set queue depth (nvme0n1) 00:19:25.163 Could not set queue depth (nvme0n2) 00:19:25.163 Could not set queue depth (nvme0n3) 00:19:25.163 Could not set queue depth (nvme0n4) 00:19:25.163 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.163 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.163 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.163 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.163 fio-3.35 00:19:25.163 Starting 4 threads 00:19:28.440 10:49:44 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:28.440 10:49:44 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:28.440 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=331776, buflen=4096 00:19:28.440 fio: pid=3473611, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.440 10:49:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.440 10:49:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:28.440 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=315392, buflen=4096 00:19:28.440 fio: pid=3473610, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.698 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=42004480, buflen=4096 00:19:28.698 fio: pid=3473608, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.698 10:49:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.698 10:49:45 -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:28.956 10:49:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.956 10:49:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:28.956 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=28708864, buflen=4096 00:19:28.956 fio: pid=3473609, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.956 00:19:28.956 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473608: Wed Jul 10 10:49:45 2024 00:19:28.956 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(40.1MiB/3389msec) 00:19:28.956 slat (usec): min=5, max=25711, avg=15.69, stdev=309.32 00:19:28.956 clat (usec): min=247, max=20363, avg=311.17, stdev=206.13 00:19:28.956 lat (usec): min=252, max=26064, avg=326.86, stdev=373.05 00:19:28.956 clat percentiles (usec): 00:19:28.956 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:19:28.956 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:19:28.956 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 437], 00:19:28.956 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 791], 99.95th=[ 922], 00:19:28.956 | 99.99th=[ 1045] 00:19:28.956 bw ( KiB/s): min= 9664, max=13480, per=63.88%, avg=12060.00, stdev=1334.70, samples=6 00:19:28.956 iops : min= 2416, max= 3370, avg=3015.00, stdev=333.67, samples=6 00:19:28.956 lat (usec) : 250=0.02%, 500=97.05%, 750=2.80%, 1000=0.11% 00:19:28.956 lat (msec) : 2=0.01%, 50=0.01% 00:19:28.956 cpu : usr=2.39%, sys=4.60%, ctx=10263, majf=0, minf=1 00:19:28.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.956 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.956 issued rwts: total=10256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.956 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473609: Wed Jul 10 10:49:45 2024 00:19:28.956 read: IOPS=1899, BW=7596KiB/s (7778kB/s)(27.4MiB/3691msec) 00:19:28.956 slat (usec): min=4, max=30875, avg=19.21, stdev=393.29 00:19:28.956 clat (usec): min=249, max=42374, avg=500.89, stdev=2697.61 00:19:28.956 lat (usec): min=254, max=72983, avg=520.10, stdev=2823.45 00:19:28.956 clat percentiles (usec): 00:19:28.956 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:19:28.956 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 330], 00:19:28.956 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 445], 00:19:28.956 | 99.00th=[ 506], 99.50th=[ 594], 99.90th=[42206], 99.95th=[42206], 00:19:28.956 | 99.99th=[42206] 00:19:28.956 bw ( KiB/s): min= 133, max=13080, per=42.39%, avg=8004.14, stdev=4770.01, samples=7 00:19:28.956 iops : min= 33, max= 3270, avg=2001.00, stdev=1192.57, samples=7 00:19:28.956 lat (usec) : 250=0.03%, 500=98.66%, 750=0.84%, 1000=0.01% 00:19:28.956 lat (msec) : 4=0.01%, 50=0.43% 00:19:28.956 cpu : usr=1.22%, sys=3.33%, ctx=7013, majf=0, minf=1 00:19:28.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.956 complete : 0=0.1%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.956 issued rwts: total=7010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.956 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473610: Wed Jul 10 10:49:45 2024 00:19:28.956 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(308KiB/3190msec) 00:19:28.956 slat (usec): min=13, max=10869, avg=158.12, stdev=1228.54 00:19:28.956 clat (usec): min=400, max=42283, avg=40971.48, stdev=4711.46 00:19:28.956 lat (usec): min=418, max=52968, avg=41131.47, stdev=4904.12 00:19:28.956 clat percentiles (usec): 00:19:28.957 | 1.00th=[ 400], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:28.957 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:19:28.957 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:28.957 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:28.957 | 99.99th=[42206] 00:19:28.957 bw ( KiB/s): min= 88, max= 104, per=0.51%, avg=97.33, stdev= 6.02, samples=6 00:19:28.957 iops : min= 22, max= 26, avg=24.33, stdev= 1.51, samples=6 00:19:28.957 lat (usec) : 500=1.28% 00:19:28.957 lat (msec) : 50=97.44% 00:19:28.957 cpu : usr=0.06%, sys=0.00%, ctx=82, majf=0, minf=1 00:19:28.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.957 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.957 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.957 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473611: Wed Jul 10 10:49:45 2024 00:19:28.957 read: IOPS=28, BW=112KiB/s (115kB/s)(324KiB/2893msec) 00:19:28.957 slat (nsec): min=9222, max=53020, avg=19519.11, stdev=7609.82 00:19:28.957 clat (usec): min=346, max=42134, avg=35415.43, stdev=14686.35 00:19:28.957 lat (usec): min=362, max=42163, avg=35434.96, stdev=14685.61 00:19:28.957 clat percentiles (usec): 00:19:28.957 | 1.00th=[ 347], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[41157], 00:19:28.957 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:19:28.957 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:28.957 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:28.957 | 99.99th=[42206] 00:19:28.957 bw ( KiB/s): min= 88, max= 152, per=0.57%, avg=107.20, stdev=25.67, samples=5 00:19:28.957 iops : min= 22, max= 38, avg=26.80, stdev= 6.42, samples=5 00:19:28.957 lat (usec) : 500=10.98%, 750=3.66% 00:19:28.957 lat (msec) : 50=84.15% 00:19:28.957 cpu : usr=0.07%, sys=0.00%, ctx=82, majf=0, minf=1 00:19:28.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.957 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.957 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.957 00:19:28.957 Run status group 0 (all jobs): 00:19:28.957 READ: bw=18.4MiB/s (19.3MB/s), 96.6KiB/s-11.8MiB/s (98.9kB/s-12.4MB/s), io=68.1MiB (71.4MB), run=2893-3691msec 00:19:28.957 00:19:28.957 Disk stats (read/write): 00:19:28.957 nvme0n1: ios=10163/0, merge=0/0, ticks=3041/0, 
in_queue=3041, util=94.42% 00:19:28.957 nvme0n2: ios=7007/0, merge=0/0, ticks=3358/0, in_queue=3358, util=95.39% 00:19:28.957 nvme0n3: ios=108/0, merge=0/0, ticks=3722/0, in_queue=3722, util=99.10% 00:19:28.957 nvme0n4: ios=80/0, merge=0/0, ticks=2828/0, in_queue=2828, util=96.75% 00:19:29.214 10:49:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.214 10:49:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:29.472 10:49:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.472 10:49:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:29.729 10:49:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.729 10:49:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:29.987 10:49:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.987 10:49:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:30.244 10:49:46 -- target/fio.sh@69 -- # fio_status=0 00:19:30.244 10:49:46 -- target/fio.sh@70 -- # wait 3473462 00:19:30.244 10:49:46 -- target/fio.sh@70 -- # fio_status=4 00:19:30.244 10:49:46 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.244 10:49:47 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.244 10:49:47 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.244 10:49:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.244 10:49:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.244 10:49:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.244 10:49:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.244 10:49:47 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.244 10:49:47 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:30.244 10:49:47 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:30.244 nvmf hotplug test: fio failed as expected 00:19:30.244 10:49:47 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.500 10:49:47 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:30.500 10:49:47 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:30.500 10:49:47 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:30.500 10:49:47 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:30.500 10:49:47 -- target/fio.sh@91 -- # nvmftestfini 00:19:30.500 10:49:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:30.500 10:49:47 -- nvmf/common.sh@116 -- # sync 00:19:30.500 10:49:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:30.500 10:49:47 -- nvmf/common.sh@119 -- # set +e 00:19:30.500 10:49:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:30.500 10:49:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:30.500 rmmod nvme_tcp 00:19:30.500 rmmod nvme_fabrics 00:19:30.500 rmmod nvme_keyring 00:19:30.757 10:49:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
00:19:30.757 10:49:47 -- nvmf/common.sh@123 -- # set -e 00:19:30.757 10:49:47 -- nvmf/common.sh@124 -- # return 0 00:19:30.757 10:49:47 -- nvmf/common.sh@477 -- # '[' -n 3471429 ']' 00:19:30.757 10:49:47 -- nvmf/common.sh@478 -- # killprocess 3471429 00:19:30.757 10:49:47 -- common/autotest_common.sh@926 -- # '[' -z 3471429 ']' 00:19:30.757 10:49:47 -- common/autotest_common.sh@930 -- # kill -0 3471429 00:19:30.757 10:49:47 -- common/autotest_common.sh@931 -- # uname 00:19:30.757 10:49:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:30.757 10:49:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3471429 00:19:30.757 10:49:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:30.757 10:49:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:30.757 10:49:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3471429' 00:19:30.757 killing process with pid 3471429 00:19:30.757 10:49:47 -- common/autotest_common.sh@945 -- # kill 3471429 00:19:30.757 10:49:47 -- common/autotest_common.sh@950 -- # wait 3471429 00:19:31.016 10:49:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.016 10:49:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.016 10:49:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.016 10:49:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.016 10:49:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.016 10:49:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.016 10:49:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.016 10:49:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.945 10:49:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:32.945 00:19:32.945 real 0m23.789s 00:19:32.945 user 1m22.992s 00:19:32.945 sys 0m6.958s 00:19:32.945 10:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.945 10:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 ************************************ 00:19:32.945 END TEST nvmf_fio_target 00:19:32.945 ************************************ 00:19:32.945 10:49:49 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:32.945 10:49:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:32.945 10:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:32.945 10:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 ************************************ 00:19:32.945 START TEST nvmf_bdevio 00:19:32.946 ************************************ 00:19:32.946 10:49:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:32.946 * Looking for test storage... 
00:19:32.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.946 10:49:49 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.946 10:49:49 -- nvmf/common.sh@7 -- # uname -s 00:19:32.946 10:49:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.946 10:49:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.946 10:49:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.946 10:49:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.946 10:49:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.946 10:49:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.946 10:49:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.946 10:49:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.946 10:49:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.946 10:49:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.946 10:49:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.946 10:49:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.946 10:49:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.946 10:49:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.946 10:49:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.946 10:49:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.946 10:49:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.946 10:49:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.946 10:49:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.946 10:49:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.946 10:49:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.946 10:49:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.946 10:49:49 -- paths/export.sh@5 -- # export PATH 00:19:32.946 10:49:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.946 10:49:49 -- nvmf/common.sh@46 -- # : 0 00:19:32.946 10:49:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:32.946 10:49:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:32.946 10:49:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:32.946 10:49:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.946 10:49:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.946 10:49:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:32.946 10:49:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:32.946 10:49:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:32.946 10:49:49 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.946 10:49:49 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.946 10:49:49 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:32.946 10:49:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:32.946 10:49:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.946 10:49:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:32.946 10:49:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:32.946 10:49:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:32.946 10:49:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.946 10:49:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.946 10:49:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.946 10:49:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:32.946 10:49:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:32.946 10:49:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:32.946 10:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:34.845 10:49:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:34.845 10:49:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:34.845 10:49:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:34.845 10:49:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:34.845 10:49:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:34.845 10:49:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:34.845 10:49:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:34.845 10:49:51 -- nvmf/common.sh@294 -- # net_devs=() 00:19:34.845 10:49:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:34.845 10:49:51 -- nvmf/common.sh@295 
-- # e810=() 00:19:34.845 10:49:51 -- nvmf/common.sh@295 -- # local -ga e810 00:19:34.845 10:49:51 -- nvmf/common.sh@296 -- # x722=() 00:19:34.845 10:49:51 -- nvmf/common.sh@296 -- # local -ga x722 00:19:34.845 10:49:51 -- nvmf/common.sh@297 -- # mlx=() 00:19:34.845 10:49:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:34.845 10:49:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.845 10:49:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:34.845 10:49:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:34.845 10:49:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:34.845 10:49:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.845 10:49:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:34.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:34.845 10:49:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.845 10:49:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:34.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:34.845 10:49:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:34.845 10:49:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.845 10:49:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.845 10:49:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.845 10:49:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.845 10:49:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:34.845 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:34.845 10:49:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.845 10:49:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.845 10:49:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.845 10:49:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.845 10:49:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.845 10:49:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:34.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:34.845 10:49:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.845 10:49:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:34.845 10:49:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:34.845 10:49:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:34.845 10:49:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:34.845 10:49:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.845 10:49:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.845 10:49:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.845 10:49:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:34.845 10:49:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.845 10:49:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.845 10:49:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:34.845 10:49:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.845 10:49:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.845 10:49:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:34.845 10:49:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:34.845 10:49:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.845 10:49:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.104 10:49:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.104 10:49:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.104 10:49:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:35.104 10:49:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.104 10:49:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.104 10:49:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.104 10:49:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:35.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:19:35.104 00:19:35.104 --- 10.0.0.2 ping statistics --- 00:19:35.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.104 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:35.104 10:49:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:19:35.104 00:19:35.104 --- 10.0.0.1 ping statistics --- 00:19:35.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.104 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:19:35.104 10:49:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.104 10:49:51 -- nvmf/common.sh@410 -- # return 0 00:19:35.104 10:49:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:35.104 10:49:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.104 10:49:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:35.104 10:49:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:35.104 10:49:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.104 10:49:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:35.104 10:49:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:35.104 10:49:51 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:35.104 10:49:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:35.104 10:49:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:35.104 10:49:51 -- common/autotest_common.sh@10 -- # set +x 00:19:35.104 10:49:51 -- nvmf/common.sh@469 -- # nvmfpid=3476156 00:19:35.104 10:49:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:35.104 10:49:51 -- nvmf/common.sh@470 -- # waitforlisten 3476156 00:19:35.104 10:49:51 -- common/autotest_common.sh@819 -- # '[' -z 3476156 ']' 00:19:35.104 10:49:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.104 10:49:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:35.104 10:49:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.104 10:49:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:35.104 10:49:51 -- common/autotest_common.sh@10 -- # set +x 00:19:35.104 [2024-07-10 10:49:51.814985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:35.104 [2024-07-10 10:49:51.815068] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.104 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.104 [2024-07-10 10:49:51.891345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.362 [2024-07-10 10:49:51.987943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.362 [2024-07-10 10:49:51.988115] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.362 [2024-07-10 10:49:51.988136] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.362 [2024-07-10 10:49:51.988150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
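The interface plumbing traced above (nvmf_tcp_init in nvmf/common.sh) can be summarized by the following sketch: the target-side port is moved into its own network namespace and addressed, the initiator port stays in the default namespace, TCP port 4420 is opened, and connectivity is verified in both directions. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones used in this run; the physical ports are assumed to already exist.

# Condensed sketch of the TCP test-network setup shown in the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator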
00:19:35.362 [2024-07-10 10:49:51.988239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:35.362 [2024-07-10 10:49:51.988299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:35.362 [2024-07-10 10:49:51.988354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:35.362 [2024-07-10 10:49:51.988357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.297 10:49:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.297 10:49:52 -- common/autotest_common.sh@852 -- # return 0 00:19:36.297 10:49:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:36.297 10:49:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:36.297 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.297 10:49:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.297 10:49:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.297 10:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.297 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.297 [2024-07-10 10:49:52.849080] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.297 10:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.297 10:49:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.297 10:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.297 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.297 Malloc0 00:19:36.297 10:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.297 10:49:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.297 10:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.297 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.297 10:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.297 10:49:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.297 10:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.297 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.297 10:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.297 10:49:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.297 10:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.297 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.297 [2024-07-10 10:49:52.900276] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.297 10:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.297 10:49:52 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:36.297 10:49:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:36.297 10:49:52 -- nvmf/common.sh@520 -- # config=() 00:19:36.297 10:49:52 -- nvmf/common.sh@520 -- # local subsystem config 00:19:36.297 10:49:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:36.297 10:49:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:36.297 { 00:19:36.297 "params": { 00:19:36.297 "name": "Nvme$subsystem", 00:19:36.297 "trtype": "$TEST_TRANSPORT", 00:19:36.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.297 "adrfam": "ipv4", 00:19:36.297 "trsvcid": 
"$NVMF_PORT", 00:19:36.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.297 "hdgst": ${hdgst:-false}, 00:19:36.297 "ddgst": ${ddgst:-false} 00:19:36.297 }, 00:19:36.298 "method": "bdev_nvme_attach_controller" 00:19:36.298 } 00:19:36.298 EOF 00:19:36.298 )") 00:19:36.298 10:49:52 -- nvmf/common.sh@542 -- # cat 00:19:36.298 10:49:52 -- nvmf/common.sh@544 -- # jq . 00:19:36.298 10:49:52 -- nvmf/common.sh@545 -- # IFS=, 00:19:36.298 10:49:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:36.298 "params": { 00:19:36.298 "name": "Nvme1", 00:19:36.298 "trtype": "tcp", 00:19:36.298 "traddr": "10.0.0.2", 00:19:36.298 "adrfam": "ipv4", 00:19:36.298 "trsvcid": "4420", 00:19:36.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.298 "hdgst": false, 00:19:36.298 "ddgst": false 00:19:36.298 }, 00:19:36.298 "method": "bdev_nvme_attach_controller" 00:19:36.298 }' 00:19:36.298 [2024-07-10 10:49:52.942514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:36.298 [2024-07-10 10:49:52.942594] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476326 ] 00:19:36.298 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.298 [2024-07-10 10:49:53.007682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.298 [2024-07-10 10:49:53.093857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.298 [2024-07-10 10:49:53.093910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.298 [2024-07-10 10:49:53.093913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.863 [2024-07-10 10:49:53.423411] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:36.863 [2024-07-10 10:49:53.423463] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:36.863 I/O targets: 00:19:36.863 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:36.863 00:19:36.863 00:19:36.863 CUnit - A unit testing framework for C - Version 2.1-3 00:19:36.863 http://cunit.sourceforge.net/ 00:19:36.863 00:19:36.863 00:19:36.863 Suite: bdevio tests on: Nvme1n1 00:19:36.863 Test: blockdev write read block ...passed 00:19:36.863 Test: blockdev write zeroes read block ...passed 00:19:36.863 Test: blockdev write zeroes read no split ...passed 00:19:36.863 Test: blockdev write zeroes read split ...passed 00:19:36.863 Test: blockdev write zeroes read split partial ...passed 00:19:36.863 Test: blockdev reset ...[2024-07-10 10:49:53.630283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.863 [2024-07-10 10:49:53.630383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d69860 (9): Bad file descriptor 00:19:36.863 [2024-07-10 10:49:53.682784] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:36.863 passed 00:19:36.863 Test: blockdev write read 8 blocks ...passed 00:19:36.863 Test: blockdev write read size > 128k ...passed 00:19:36.863 Test: blockdev write read invalid size ...passed 00:19:37.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:37.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:37.135 Test: blockdev write read max offset ...passed 00:19:37.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:37.135 Test: blockdev writev readv 8 blocks ...passed 00:19:37.135 Test: blockdev writev readv 30 x 1block ...passed 00:19:37.135 Test: blockdev writev readv block ...passed 00:19:37.135 Test: blockdev writev readv size > 128k ...passed 00:19:37.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:37.135 Test: blockdev comparev and writev ...[2024-07-10 10:49:53.858643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.135 [2024-07-10 10:49:53.858680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.858704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.858720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.859082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.859108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.859129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.859145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.859522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.859546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.859568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.859584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.859951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.859975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.860004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.136 [2024-07-10 10:49:53.860020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:37.136 passed 00:19:37.136 Test: blockdev nvme passthru rw ...passed 00:19:37.136 Test: blockdev nvme passthru vendor specific ...[2024-07-10 10:49:53.942822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.136 [2024-07-10 10:49:53.942848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.943026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.136 [2024-07-10 10:49:53.943048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.943225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.136 [2024-07-10 10:49:53.943248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:37.136 [2024-07-10 10:49:53.943422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.136 [2024-07-10 10:49:53.943453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:37.136 passed 00:19:37.420 Test: blockdev nvme admin passthru ...passed 00:19:37.420 Test: blockdev copy ...passed 00:19:37.420 00:19:37.420 Run Summary: Type Total Ran Passed Failed Inactive 00:19:37.420 suites 1 1 n/a 0 0 00:19:37.420 tests 23 23 23 0 0 00:19:37.420 asserts 152 152 152 0 n/a 00:19:37.420 00:19:37.420 Elapsed time = 1.152 seconds 00:19:37.420 10:49:54 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.420 10:49:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.420 10:49:54 -- common/autotest_common.sh@10 -- # set +x 00:19:37.420 10:49:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.420 10:49:54 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:37.420 10:49:54 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:37.420 10:49:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:37.420 10:49:54 -- nvmf/common.sh@116 -- # sync 00:19:37.420 10:49:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:37.420 10:49:54 -- nvmf/common.sh@119 -- # set +e 00:19:37.420 10:49:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:37.420 10:49:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:37.420 rmmod nvme_tcp 00:19:37.420 rmmod nvme_fabrics 00:19:37.681 rmmod nvme_keyring 00:19:37.681 10:49:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:37.681 10:49:54 -- nvmf/common.sh@123 -- # set -e 00:19:37.681 10:49:54 -- nvmf/common.sh@124 -- # return 0 00:19:37.681 10:49:54 -- nvmf/common.sh@477 -- # '[' -n 3476156 ']' 00:19:37.681 10:49:54 -- nvmf/common.sh@478 -- # killprocess 3476156 00:19:37.681 10:49:54 -- common/autotest_common.sh@926 -- # '[' -z 3476156 ']' 00:19:37.681 10:49:54 -- common/autotest_common.sh@930 -- # kill -0 3476156 00:19:37.681 10:49:54 -- common/autotest_common.sh@931 -- # uname 00:19:37.681 10:49:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.681 10:49:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3476156 00:19:37.681 10:49:54 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:37.681 10:49:54 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:37.681 10:49:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3476156' 00:19:37.681 killing process with pid 3476156 00:19:37.681 10:49:54 -- common/autotest_common.sh@945 -- # kill 3476156 00:19:37.681 10:49:54 -- common/autotest_common.sh@950 -- # wait 3476156 00:19:37.940 10:49:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:37.940 10:49:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:37.940 10:49:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:37.940 10:49:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.940 10:49:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:37.940 10:49:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.941 10:49:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.941 10:49:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.846 10:49:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:39.846 00:19:39.846 real 0m6.903s 00:19:39.846 user 0m13.425s 00:19:39.846 sys 0m2.059s 00:19:39.846 10:49:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.846 10:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:39.846 ************************************ 00:19:39.846 END TEST nvmf_bdevio 00:19:39.846 ************************************ 00:19:39.846 10:49:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:39.846 10:49:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:39.846 10:49:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:39.846 10:49:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.846 10:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:39.846 ************************************ 00:19:39.846 START TEST nvmf_bdevio_no_huge 00:19:39.846 ************************************ 00:19:39.846 10:49:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:39.846 * Looking for test storage... 
00:19:39.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.846 10:49:56 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.846 10:49:56 -- nvmf/common.sh@7 -- # uname -s 00:19:39.846 10:49:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.846 10:49:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.846 10:49:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.846 10:49:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.846 10:49:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.846 10:49:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.846 10:49:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.846 10:49:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.846 10:49:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.846 10:49:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.846 10:49:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.846 10:49:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.846 10:49:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.846 10:49:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.846 10:49:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.846 10:49:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.846 10:49:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.846 10:49:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.846 10:49:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.846 10:49:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.846 10:49:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.846 10:49:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.846 10:49:56 -- paths/export.sh@5 -- # export PATH 00:19:39.846 10:49:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.846 10:49:56 -- nvmf/common.sh@46 -- # : 0 00:19:39.846 10:49:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:39.846 10:49:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:39.846 10:49:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:39.846 10:49:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.846 10:49:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.846 10:49:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:39.846 10:49:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:39.846 10:49:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:39.846 10:49:56 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.846 10:49:56 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.846 10:49:56 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:39.846 10:49:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:39.846 10:49:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.846 10:49:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:39.846 10:49:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:39.846 10:49:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:39.846 10:49:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.846 10:49:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.846 10:49:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.104 10:49:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:40.104 10:49:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:40.104 10:49:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:40.104 10:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:42.005 10:49:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:42.005 10:49:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:42.005 10:49:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:42.005 10:49:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:42.005 10:49:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:42.005 10:49:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:42.005 10:49:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:42.005 10:49:58 -- nvmf/common.sh@294 -- # net_devs=() 00:19:42.005 10:49:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:42.005 10:49:58 -- nvmf/common.sh@295 
-- # e810=() 00:19:42.005 10:49:58 -- nvmf/common.sh@295 -- # local -ga e810 00:19:42.005 10:49:58 -- nvmf/common.sh@296 -- # x722=() 00:19:42.005 10:49:58 -- nvmf/common.sh@296 -- # local -ga x722 00:19:42.005 10:49:58 -- nvmf/common.sh@297 -- # mlx=() 00:19:42.005 10:49:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:42.005 10:49:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.005 10:49:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:42.005 10:49:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:42.005 10:49:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:42.005 10:49:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:42.005 10:49:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:42.005 10:49:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:42.005 10:49:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.005 10:49:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:42.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:42.006 10:49:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.006 10:49:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:42.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:42.006 10:49:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:42.006 10:49:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.006 10:49:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.006 10:49:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.006 10:49:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.006 10:49:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:42.006 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:42.006 10:49:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.006 10:49:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.006 10:49:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.006 10:49:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.006 10:49:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.006 10:49:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:42.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:42.006 10:49:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.006 10:49:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:42.006 10:49:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:42.006 10:49:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:42.006 10:49:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.006 10:49:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.006 10:49:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.006 10:49:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:42.006 10:49:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.006 10:49:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.006 10:49:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:42.006 10:49:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.006 10:49:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.006 10:49:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:42.006 10:49:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:42.006 10:49:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.006 10:49:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.006 10:49:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.006 10:49:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.006 10:49:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:42.006 10:49:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.006 10:49:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.006 10:49:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.006 10:49:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:42.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:19:42.006 00:19:42.006 --- 10.0.0.2 ping statistics --- 00:19:42.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.006 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:19:42.006 10:49:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:19:42.006 00:19:42.006 --- 10.0.0.1 ping statistics --- 00:19:42.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.006 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:42.006 10:49:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.006 10:49:58 -- nvmf/common.sh@410 -- # return 0 00:19:42.006 10:49:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:42.006 10:49:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.006 10:49:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:42.006 10:49:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.006 10:49:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:42.006 10:49:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:42.006 10:49:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:42.006 10:49:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:42.006 10:49:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:42.006 10:49:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.006 10:49:58 -- nvmf/common.sh@469 -- # nvmfpid=3478510 00:19:42.006 10:49:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:42.006 10:49:58 -- nvmf/common.sh@470 -- # waitforlisten 3478510 00:19:42.006 10:49:58 -- common/autotest_common.sh@819 -- # '[' -z 3478510 ']' 00:19:42.006 10:49:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.006 10:49:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:42.006 10:49:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.006 10:49:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:42.006 10:49:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.264 [2024-07-10 10:49:58.835504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:42.264 [2024-07-10 10:49:58.835586] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:42.264 [2024-07-10 10:49:58.905567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.264 [2024-07-10 10:49:58.992990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:42.264 [2024-07-10 10:49:58.993154] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.264 [2024-07-10 10:49:58.993174] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.264 [2024-07-10 10:49:58.993189] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
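Note on nvmf_tcp_init above: the test builds a loopback topology out of the two ports of one NIC, moving one port into a private network namespace for the target and leaving the other in the root namespace for the initiator, then opens TCP/4420 and starts nvmf_tgt without hugepages inside the namespace. A condensed sketch of those commands follows; the interface names are the ones from this run (cvl_0_0/cvl_0_1) and would differ on other hardware, and the nvmf_tgt path and flags simply mirror the invocation logged above.

    TGT_IF=cvl_0_0   # port handed to the target namespace (hardware-specific, from this run)
    INI_IF=cvl_0_1   # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator-side reachability check
    # Target started inside the namespace, without hugepages (-s 1024 MB of regular memory):
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &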
00:19:42.264 [2024-07-10 10:49:58.993293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:42.264 [2024-07-10 10:49:58.993335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:42.264 [2024-07-10 10:49:58.993388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:42.264 [2024-07-10 10:49:58.993390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.197 10:49:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.197 10:49:59 -- common/autotest_common.sh@852 -- # return 0 00:19:43.197 10:49:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.197 10:49:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.197 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.197 10:49:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.197 10:49:59 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.197 10:49:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.197 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.197 [2024-07-10 10:49:59.784597] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.197 10:49:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.197 10:49:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:43.197 10:49:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.197 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.197 Malloc0 00:19:43.197 10:49:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.197 10:49:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.197 10:49:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.197 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.197 10:49:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.197 10:49:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.197 10:49:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.197 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.197 10:49:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.197 10:49:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.197 10:49:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.197 10:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.197 [2024-07-10 10:49:59.822434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.197 10:49:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.197 10:49:59 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:43.197 10:49:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:43.197 10:49:59 -- nvmf/common.sh@520 -- # config=() 00:19:43.197 10:49:59 -- nvmf/common.sh@520 -- # local subsystem config 00:19:43.197 10:49:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:43.197 10:49:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:43.197 { 00:19:43.197 "params": { 00:19:43.197 "name": "Nvme$subsystem", 00:19:43.197 "trtype": "$TEST_TRANSPORT", 00:19:43.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.197 "adrfam": "ipv4", 00:19:43.197 
"trsvcid": "$NVMF_PORT", 00:19:43.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.197 "hdgst": ${hdgst:-false}, 00:19:43.197 "ddgst": ${ddgst:-false} 00:19:43.197 }, 00:19:43.197 "method": "bdev_nvme_attach_controller" 00:19:43.197 } 00:19:43.197 EOF 00:19:43.197 )") 00:19:43.197 10:49:59 -- nvmf/common.sh@542 -- # cat 00:19:43.197 10:49:59 -- nvmf/common.sh@544 -- # jq . 00:19:43.197 10:49:59 -- nvmf/common.sh@545 -- # IFS=, 00:19:43.197 10:49:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:43.197 "params": { 00:19:43.197 "name": "Nvme1", 00:19:43.197 "trtype": "tcp", 00:19:43.197 "traddr": "10.0.0.2", 00:19:43.197 "adrfam": "ipv4", 00:19:43.197 "trsvcid": "4420", 00:19:43.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.197 "hdgst": false, 00:19:43.197 "ddgst": false 00:19:43.197 }, 00:19:43.197 "method": "bdev_nvme_attach_controller" 00:19:43.197 }' 00:19:43.197 [2024-07-10 10:49:59.865873] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:43.197 [2024-07-10 10:49:59.865945] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3478673 ] 00:19:43.197 [2024-07-10 10:49:59.925988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.197 [2024-07-10 10:50:00.010742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.197 [2024-07-10 10:50:00.010797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.197 [2024-07-10 10:50:00.010800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.763 [2024-07-10 10:50:00.287261] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:43.763 [2024-07-10 10:50:00.287311] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:43.763 I/O targets: 00:19:43.763 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:43.763 00:19:43.763 00:19:43.763 CUnit - A unit testing framework for C - Version 2.1-3 00:19:43.763 http://cunit.sourceforge.net/ 00:19:43.763 00:19:43.763 00:19:43.763 Suite: bdevio tests on: Nvme1n1 00:19:43.763 Test: blockdev write read block ...passed 00:19:43.763 Test: blockdev write zeroes read block ...passed 00:19:43.763 Test: blockdev write zeroes read no split ...passed 00:19:43.763 Test: blockdev write zeroes read split ...passed 00:19:43.763 Test: blockdev write zeroes read split partial ...passed 00:19:43.763 Test: blockdev reset ...[2024-07-10 10:50:00.497883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.763 [2024-07-10 10:50:00.497990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8ef0 (9): Bad file descriptor 00:19:43.763 [2024-07-10 10:50:00.557431] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:43.763 passed 00:19:44.021 Test: blockdev write read 8 blocks ...passed 00:19:44.021 Test: blockdev write read size > 128k ...passed 00:19:44.021 Test: blockdev write read invalid size ...passed 00:19:44.021 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:44.021 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:44.021 Test: blockdev write read max offset ...passed 00:19:44.021 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:44.021 Test: blockdev writev readv 8 blocks ...passed 00:19:44.021 Test: blockdev writev readv 30 x 1block ...passed 00:19:44.021 Test: blockdev writev readv block ...passed 00:19:44.021 Test: blockdev writev readv size > 128k ...passed 00:19:44.021 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:44.021 Test: blockdev comparev and writev ...[2024-07-10 10:50:00.771453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.771488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.771512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.771529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.771900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.771925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.771947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.771962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.772360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.772384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.772404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.772795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.772819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:44.021 [2024-07-10 10:50:00.772840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.021 [2024-07-10 10:50:00.772855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:44.021 passed 00:19:44.280 Test: blockdev nvme passthru rw ...passed 00:19:44.280 Test: blockdev nvme passthru vendor specific ...[2024-07-10 10:50:00.854792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.280 [2024-07-10 10:50:00.854825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:44.280 [2024-07-10 10:50:00.855010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.280 [2024-07-10 10:50:00.855033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:44.280 [2024-07-10 10:50:00.855212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.280 [2024-07-10 10:50:00.855236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:44.280 [2024-07-10 10:50:00.855408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.280 [2024-07-10 10:50:00.855439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:44.280 passed 00:19:44.280 Test: blockdev nvme admin passthru ...passed 00:19:44.280 Test: blockdev copy ...passed 00:19:44.280 00:19:44.280 Run Summary: Type Total Ran Passed Failed Inactive 00:19:44.280 suites 1 1 n/a 0 0 00:19:44.280 tests 23 23 23 0 0 00:19:44.280 asserts 152 152 152 0 n/a 00:19:44.280 00:19:44.280 Elapsed time = 1.244 seconds 00:19:44.539 10:50:01 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.539 10:50:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.539 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:19:44.539 10:50:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.539 10:50:01 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:44.539 10:50:01 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:44.539 10:50:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:44.539 10:50:01 -- nvmf/common.sh@116 -- # sync 00:19:44.539 10:50:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:44.539 10:50:01 -- nvmf/common.sh@119 -- # set +e 00:19:44.539 10:50:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:44.539 10:50:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:44.539 rmmod nvme_tcp 00:19:44.539 rmmod nvme_fabrics 00:19:44.539 rmmod nvme_keyring 00:19:44.539 10:50:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:44.539 10:50:01 -- nvmf/common.sh@123 -- # set -e 00:19:44.539 10:50:01 -- nvmf/common.sh@124 -- # return 0 00:19:44.539 10:50:01 -- nvmf/common.sh@477 -- # '[' -n 3478510 ']' 00:19:44.539 10:50:01 -- nvmf/common.sh@478 -- # killprocess 3478510 00:19:44.539 10:50:01 -- common/autotest_common.sh@926 -- # '[' -z 3478510 ']' 00:19:44.539 10:50:01 -- common/autotest_common.sh@930 -- # kill -0 3478510 00:19:44.539 10:50:01 -- common/autotest_common.sh@931 -- # uname 00:19:44.539 10:50:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:44.539 10:50:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3478510 00:19:44.539 10:50:01 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:44.539 10:50:01 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:44.539 10:50:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3478510' 00:19:44.539 killing process with pid 3478510 00:19:44.539 10:50:01 -- common/autotest_common.sh@945 -- # kill 3478510 00:19:44.539 10:50:01 -- common/autotest_common.sh@950 -- # wait 3478510 00:19:45.107 10:50:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:45.107 10:50:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.107 10:50:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.107 10:50:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.107 10:50:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:45.107 10:50:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.107 10:50:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.107 10:50:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.009 10:50:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:47.009 00:19:47.009 real 0m7.145s 00:19:47.009 user 0m13.699s 00:19:47.009 sys 0m2.562s 00:19:47.009 10:50:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.009 10:50:03 -- common/autotest_common.sh@10 -- # set +x 00:19:47.009 ************************************ 00:19:47.009 END TEST nvmf_bdevio_no_huge 00:19:47.009 ************************************ 00:19:47.009 10:50:03 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:47.009 10:50:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:47.009 10:50:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.009 10:50:03 -- common/autotest_common.sh@10 -- # set +x 00:19:47.009 ************************************ 00:19:47.009 START TEST nvmf_tls 00:19:47.009 ************************************ 00:19:47.009 10:50:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:47.009 * Looking for test storage... 
00:19:47.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.009 10:50:03 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.009 10:50:03 -- nvmf/common.sh@7 -- # uname -s 00:19:47.009 10:50:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.009 10:50:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.009 10:50:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.009 10:50:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.009 10:50:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.009 10:50:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.009 10:50:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.009 10:50:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.009 10:50:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.009 10:50:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.009 10:50:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.009 10:50:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.009 10:50:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.010 10:50:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.010 10:50:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.010 10:50:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.268 10:50:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.268 10:50:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.268 10:50:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.268 10:50:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.268 10:50:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.268 10:50:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.268 10:50:03 -- paths/export.sh@5 -- # export PATH 00:19:47.269 10:50:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.269 10:50:03 -- nvmf/common.sh@46 -- # : 0 00:19:47.269 10:50:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.269 10:50:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.269 10:50:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.269 10:50:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.269 10:50:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.269 10:50:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.269 10:50:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.269 10:50:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.269 10:50:03 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.269 10:50:03 -- target/tls.sh@71 -- # nvmftestinit 00:19:47.269 10:50:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.269 10:50:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.269 10:50:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.269 10:50:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.269 10:50:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.269 10:50:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.269 10:50:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.269 10:50:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.269 10:50:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:47.269 10:50:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:47.269 10:50:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:47.269 10:50:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.170 10:50:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.170 10:50:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.170 10:50:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.170 10:50:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.170 10:50:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.170 10:50:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.170 10:50:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.170 10:50:05 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.170 10:50:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.170 10:50:05 -- nvmf/common.sh@295 -- # e810=() 00:19:49.170 
10:50:05 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.170 10:50:05 -- nvmf/common.sh@296 -- # x722=() 00:19:49.170 10:50:05 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.170 10:50:05 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.170 10:50:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.170 10:50:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.170 10:50:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.170 10:50:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:49.170 10:50:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.170 10:50:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.170 10:50:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:49.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:49.170 10:50:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.170 10:50:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:49.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:49.170 10:50:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.170 10:50:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.170 10:50:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.170 10:50:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.170 10:50:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.170 10:50:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:49.170 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:49.170 10:50:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.170 10:50:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.170 10:50:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.170 10:50:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.170 10:50:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.170 10:50:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:49.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:49.170 10:50:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.170 10:50:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.170 10:50:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.170 10:50:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:49.170 10:50:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.170 10:50:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.170 10:50:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.170 10:50:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:49.170 10:50:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.170 10:50:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.170 10:50:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:49.170 10:50:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.170 10:50:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.170 10:50:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:49.170 10:50:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:49.170 10:50:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.170 10:50:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.170 10:50:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.170 10:50:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.170 10:50:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:49.170 10:50:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.170 10:50:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.170 10:50:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.170 10:50:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:49.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:19:49.170 00:19:49.170 --- 10.0.0.2 ping statistics --- 00:19:49.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.170 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:19:49.170 10:50:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:19:49.170 00:19:49.170 --- 10.0.0.1 ping statistics --- 00:19:49.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.170 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:49.170 10:50:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.170 10:50:05 -- nvmf/common.sh@410 -- # return 0 00:19:49.170 10:50:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.170 10:50:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.170 10:50:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.170 10:50:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.170 10:50:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.170 10:50:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.170 10:50:05 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:49.170 10:50:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.170 10:50:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:49.170 10:50:05 -- common/autotest_common.sh@10 -- # set +x 00:19:49.170 10:50:05 -- nvmf/common.sh@469 -- # nvmfpid=3480758 00:19:49.170 10:50:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:49.170 10:50:05 -- nvmf/common.sh@470 -- # waitforlisten 3480758 00:19:49.170 10:50:05 -- common/autotest_common.sh@819 -- # '[' -z 3480758 ']' 00:19:49.170 10:50:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.170 10:50:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.170 10:50:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.170 10:50:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.170 10:50:05 -- common/autotest_common.sh@10 -- # set +x 00:19:49.170 [2024-07-10 10:50:05.840692] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:49.170 [2024-07-10 10:50:05.840759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.170 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.170 [2024-07-10 10:50:05.909028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.429 [2024-07-10 10:50:06.004621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.429 [2024-07-10 10:50:06.004776] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.429 [2024-07-10 10:50:06.004807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.429 [2024-07-10 10:50:06.004820] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.429 [2024-07-10 10:50:06.004858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.429 10:50:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:49.429 10:50:06 -- common/autotest_common.sh@852 -- # return 0 00:19:49.429 10:50:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.429 10:50:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:49.429 10:50:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.429 10:50:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.429 10:50:06 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:49.429 10:50:06 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:49.687 true 00:19:49.687 10:50:06 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:49.687 10:50:06 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:49.945 10:50:06 -- target/tls.sh@82 -- # version=0 00:19:49.945 10:50:06 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:49.945 10:50:06 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:50.203 10:50:06 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:50.203 10:50:06 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.461 10:50:07 -- target/tls.sh@90 -- # version=13 00:19:50.461 10:50:07 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:50.461 10:50:07 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:50.720 10:50:07 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.720 10:50:07 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:50.978 10:50:07 -- target/tls.sh@98 -- # version=7 00:19:50.978 10:50:07 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:50.978 10:50:07 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.978 10:50:07 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:51.236 10:50:07 -- target/tls.sh@105 -- # ktls=false 00:19:51.236 10:50:07 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:51.236 10:50:07 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:51.494 10:50:08 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.494 10:50:08 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:51.752 10:50:08 -- target/tls.sh@113 -- # ktls=true 00:19:51.752 10:50:08 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:51.752 10:50:08 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:51.752 10:50:08 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.752 10:50:08 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:52.010 10:50:08 -- target/tls.sh@121 -- # ktls=false 00:19:52.010 10:50:08 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:52.010 10:50:08 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
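Note on the tls.sh preamble above: because the target was started with --wait-for-rpc, the socket layer is configured entirely over RPC before initialization continues — ssl is made the default sock implementation, the TLS version is pinned and read back, and kTLS is toggled. A minimal sketch of that sequence using the same rpc.py calls seen in the trace (rpc.py path abbreviated here):

    rpc=./scripts/rpc.py            # abbreviated; the log uses the full workspace path
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13           # the test exercises both 7 and 13
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version       # -> 13
    $rpc sock_impl_set_options -i ssl --enable-ktls              # toggled on and back off by the test
    $rpc sock_impl_set_options -i ssl --disable-ktls
    $rpc framework_start_init                                    # resume the target paused by --wait-for-rpc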
00:19:52.010 10:50:08 -- target/tls.sh@49 -- # local key hash crc 00:19:52.010 10:50:08 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:52.010 10:50:08 -- target/tls.sh@51 -- # hash=01 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # gzip -1 -c 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # tail -c8 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # head -c 4 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # crc='p$H�' 00:19:52.010 10:50:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:52.010 10:50:08 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:52.010 10:50:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:52.010 10:50:08 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:52.010 10:50:08 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:52.010 10:50:08 -- target/tls.sh@49 -- # local key hash crc 00:19:52.010 10:50:08 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:52.010 10:50:08 -- target/tls.sh@51 -- # hash=01 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # gzip -1 -c 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # tail -c8 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # head -c 4 00:19:52.010 10:50:08 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:52.010 10:50:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:52.010 10:50:08 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:52.010 10:50:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:52.269 10:50:08 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:52.269 10:50:08 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.269 10:50:08 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:52.269 10:50:08 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:52.269 10:50:08 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:52.269 10:50:08 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.269 10:50:08 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:52.269 10:50:08 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:52.269 10:50:09 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:52.834 10:50:09 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.834 10:50:09 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.834 10:50:09 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.834 [2024-07-10 10:50:09.637889] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
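Note on format_interchange_psk, traced above: it turns a raw hex key into the NVMeTLSkey-1:01:...: interchange form by computing a CRC32 of the key with gzip -1 (the gzip trailer is CRC32 followed by ISIZE, so the first four of the last eight bytes are the CRC), appending that CRC to the key, and base64-encoding the result. A standalone sketch of the same pipeline, using the example key from this run:

    key=00112233445566778899aabbccddeeff              # example key used by the test
    hash=01                                           # '01' selects the CRC-protected variant
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)   # CRC32 bytes from the gzip trailer
    psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"   # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    # As in the trace, the binary CRC passes through a shell variable; the resulting key is
    # written to key1.txt/key2.txt and chmod 0600 before being handed to the target.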
00:19:52.834 10:50:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.093 10:50:09 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.351 [2024-07-10 10:50:10.119200] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.351 [2024-07-10 10:50:10.119484] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.351 10:50:10 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.608 malloc0 00:19:53.608 10:50:10 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.173 10:50:10 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.173 10:50:10 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.173 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.366 Initializing NVMe Controllers 00:20:06.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.366 Initialization complete. Launching workers. 
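Note on setup_nvmf_tgt and the perf run above: the TCP listener is created with -k so TLS is required on that listen address, the host NQN is admitted with --psk pointing at the interchange key file, and spdk_nvme_perf connects with -S ssl and --psk-path. The sketch below condenses the rpc.py and perf invocations from the trace (file paths abbreviated):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
    # Initiator: run in the same namespace as the target in this setup, exactly as logged above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path key1.txt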
00:20:06.366 ======================================================== 00:20:06.366 Latency(us) 00:20:06.366 Device Information : IOPS MiB/s Average min max 00:20:06.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7842.65 30.64 8162.91 1267.50 9037.75 00:20:06.366 ======================================================== 00:20:06.366 Total : 7842.65 30.64 8162.91 1267.50 9037.75 00:20:06.366 00:20:06.366 10:50:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.366 10:50:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.366 10:50:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.366 10:50:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.366 10:50:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:06.366 10:50:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.366 10:50:21 -- target/tls.sh@28 -- # bdevperf_pid=3482591 00:20:06.366 10:50:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.366 10:50:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.366 10:50:21 -- target/tls.sh@31 -- # waitforlisten 3482591 /var/tmp/bdevperf.sock 00:20:06.366 10:50:21 -- common/autotest_common.sh@819 -- # '[' -z 3482591 ']' 00:20:06.366 10:50:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.366 10:50:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.366 10:50:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.366 10:50:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.366 10:50:21 -- common/autotest_common.sh@10 -- # set +x 00:20:06.366 [2024-07-10 10:50:21.086746] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:20:06.366 [2024-07-10 10:50:21.086830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482591 ] 00:20:06.366 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.366 [2024-07-10 10:50:21.143632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.366 [2024-07-10 10:50:21.225167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.366 10:50:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:06.366 10:50:22 -- common/autotest_common.sh@852 -- # return 0 00:20:06.366 10:50:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.366 [2024-07-10 10:50:22.333887] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.366 TLSTESTn1 00:20:06.366 10:50:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:06.366 Running I/O for 10 seconds... 00:20:16.323 00:20:16.324 Latency(us) 00:20:16.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.324 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:16.324 Verification LBA range: start 0x0 length 0x2000 00:20:16.324 TLSTESTn1 : 10.05 2031.73 7.94 0.00 0.00 62821.80 8349.77 66409.81 00:20:16.324 =================================================================================================================== 00:20:16.324 Total : 2031.73 7.94 0.00 0.00 62821.80 8349.77 66409.81 00:20:16.324 0 00:20:16.324 10:50:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.324 10:50:32 -- target/tls.sh@45 -- # killprocess 3482591 00:20:16.324 10:50:32 -- common/autotest_common.sh@926 -- # '[' -z 3482591 ']' 00:20:16.324 10:50:32 -- common/autotest_common.sh@930 -- # kill -0 3482591 00:20:16.324 10:50:32 -- common/autotest_common.sh@931 -- # uname 00:20:16.324 10:50:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:16.324 10:50:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3482591 00:20:16.324 10:50:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:16.324 10:50:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:16.324 10:50:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3482591' 00:20:16.324 killing process with pid 3482591 00:20:16.324 10:50:32 -- common/autotest_common.sh@945 -- # kill 3482591 00:20:16.324 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.324 00:20:16.324 Latency(us) 00:20:16.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.324 =================================================================================================================== 00:20:16.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.324 10:50:32 -- common/autotest_common.sh@950 -- # wait 3482591 00:20:16.324 10:50:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:16.324 10:50:32 -- common/autotest_common.sh@640 -- # local es=0 00:20:16.324 10:50:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:16.324 10:50:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:16.324 10:50:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:16.324 10:50:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:16.324 10:50:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:16.324 10:50:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:16.324 10:50:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:16.324 10:50:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:16.324 10:50:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:16.324 10:50:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:16.324 10:50:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.324 10:50:32 -- target/tls.sh@28 -- # bdevperf_pid=3484086 00:20:16.324 10:50:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.324 10:50:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.324 10:50:32 -- target/tls.sh@31 -- # waitforlisten 3484086 /var/tmp/bdevperf.sock 00:20:16.324 10:50:32 -- common/autotest_common.sh@819 -- # '[' -z 3484086 ']' 00:20:16.324 10:50:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.324 10:50:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:16.324 10:50:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.324 10:50:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:16.324 10:50:32 -- common/autotest_common.sh@10 -- # set +x 00:20:16.324 [2024-07-10 10:50:32.902356] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:20:16.324 [2024-07-10 10:50:32.902444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484086 ] 00:20:16.324 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.324 [2024-07-10 10:50:32.961003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.324 [2024-07-10 10:50:33.045809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.255 10:50:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.255 10:50:33 -- common/autotest_common.sh@852 -- # return 0 00:20:17.255 10:50:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:17.513 [2024-07-10 10:50:34.099362] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.513 [2024-07-10 10:50:34.104866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:17.513 [2024-07-10 10:50:34.105380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbc130 (107): Transport endpoint is not connected 00:20:17.513 [2024-07-10 10:50:34.106370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbc130 (9): Bad file descriptor 00:20:17.513 [2024-07-10 10:50:34.107368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:17.513 [2024-07-10 10:50:34.107388] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:17.513 [2024-07-10 10:50:34.107416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:17.513 request: 00:20:17.513 { 00:20:17.513 "name": "TLSTEST", 00:20:17.513 "trtype": "tcp", 00:20:17.513 "traddr": "10.0.0.2", 00:20:17.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.513 "adrfam": "ipv4", 00:20:17.513 "trsvcid": "4420", 00:20:17.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.514 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:17.514 "method": "bdev_nvme_attach_controller", 00:20:17.514 "req_id": 1 00:20:17.514 } 00:20:17.514 Got JSON-RPC error response 00:20:17.514 response: 00:20:17.514 { 00:20:17.514 "code": -32602, 00:20:17.514 "message": "Invalid parameters" 00:20:17.514 } 00:20:17.514 10:50:34 -- target/tls.sh@36 -- # killprocess 3484086 00:20:17.514 10:50:34 -- common/autotest_common.sh@926 -- # '[' -z 3484086 ']' 00:20:17.514 10:50:34 -- common/autotest_common.sh@930 -- # kill -0 3484086 00:20:17.514 10:50:34 -- common/autotest_common.sh@931 -- # uname 00:20:17.514 10:50:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.514 10:50:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3484086 00:20:17.514 10:50:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.514 10:50:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.514 10:50:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3484086' 00:20:17.514 killing process with pid 3484086 00:20:17.514 10:50:34 -- common/autotest_common.sh@945 -- # kill 3484086 00:20:17.514 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.514 00:20:17.514 Latency(us) 00:20:17.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.514 =================================================================================================================== 00:20:17.514 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.514 10:50:34 -- common/autotest_common.sh@950 -- # wait 3484086 00:20:17.771 10:50:34 -- target/tls.sh@37 -- # return 1 00:20:17.771 10:50:34 -- common/autotest_common.sh@643 -- # es=1 00:20:17.771 10:50:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:17.771 10:50:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:17.771 10:50:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:17.771 10:50:34 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.771 10:50:34 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.771 10:50:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.771 10:50:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:17.771 10:50:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.771 10:50:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:17.771 10:50:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.771 10:50:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.771 10:50:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.771 10:50:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.771 10:50:34 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:17.771 10:50:34 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:17.771 10:50:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.771 10:50:34 -- target/tls.sh@28 -- # bdevperf_pid=3484238 00:20:17.771 10:50:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.771 10:50:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.771 10:50:34 -- target/tls.sh@31 -- # waitforlisten 3484238 /var/tmp/bdevperf.sock 00:20:17.771 10:50:34 -- common/autotest_common.sh@819 -- # '[' -z 3484238 ']' 00:20:17.771 10:50:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.771 10:50:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.771 10:50:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.771 10:50:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.771 10:50:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.771 [2024-07-10 10:50:34.422552] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:17.771 [2024-07-10 10:50:34.422629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484238 ] 00:20:17.771 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.771 [2024-07-10 10:50:34.480183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.771 [2024-07-10 10:50:34.563114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.704 10:50:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.704 10:50:35 -- common/autotest_common.sh@852 -- # return 0 00:20:18.704 10:50:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.961 [2024-07-10 10:50:35.656259] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.961 [2024-07-10 10:50:35.667754] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.961 [2024-07-10 10:50:35.667788] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.961 [2024-07-10 10:50:35.667909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.961 [2024-07-10 10:50:35.668169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd18130 (107): Transport endpoint is not connected 00:20:18.961 [2024-07-10 10:50:35.669159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xd18130 (9): Bad file descriptor 00:20:18.961 [2024-07-10 10:50:35.670158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.961 [2024-07-10 10:50:35.670176] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.961 [2024-07-10 10:50:35.670203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:18.961 request: 00:20:18.961 { 00:20:18.961 "name": "TLSTEST", 00:20:18.961 "trtype": "tcp", 00:20:18.961 "traddr": "10.0.0.2", 00:20:18.961 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.961 "adrfam": "ipv4", 00:20:18.961 "trsvcid": "4420", 00:20:18.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.961 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:18.961 "method": "bdev_nvme_attach_controller", 00:20:18.961 "req_id": 1 00:20:18.961 } 00:20:18.961 Got JSON-RPC error response 00:20:18.961 response: 00:20:18.961 { 00:20:18.961 "code": -32602, 00:20:18.961 "message": "Invalid parameters" 00:20:18.961 } 00:20:18.961 10:50:35 -- target/tls.sh@36 -- # killprocess 3484238 00:20:18.961 10:50:35 -- common/autotest_common.sh@926 -- # '[' -z 3484238 ']' 00:20:18.961 10:50:35 -- common/autotest_common.sh@930 -- # kill -0 3484238 00:20:18.961 10:50:35 -- common/autotest_common.sh@931 -- # uname 00:20:18.961 10:50:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.961 10:50:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3484238 00:20:18.961 10:50:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.961 10:50:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.961 10:50:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3484238' 00:20:18.961 killing process with pid 3484238 00:20:18.961 10:50:35 -- common/autotest_common.sh@945 -- # kill 3484238 00:20:18.961 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.961 00:20:18.961 Latency(us) 00:20:18.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.961 =================================================================================================================== 00:20:18.961 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.961 10:50:35 -- common/autotest_common.sh@950 -- # wait 3484238 00:20:19.219 10:50:35 -- target/tls.sh@37 -- # return 1 00:20:19.219 10:50:35 -- common/autotest_common.sh@643 -- # es=1 00:20:19.219 10:50:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.219 10:50:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:19.219 10:50:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.219 10:50:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.219 10:50:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:19.219 10:50:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.219 10:50:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:19.219 10:50:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.219 10:50:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:19.219 10:50:35 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.219 10:50:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.219 10:50:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.219 10:50:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:19.219 10:50:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.219 10:50:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:19.219 10:50:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.219 10:50:35 -- target/tls.sh@28 -- # bdevperf_pid=3484386 00:20:19.219 10:50:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.219 10:50:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.219 10:50:35 -- target/tls.sh@31 -- # waitforlisten 3484386 /var/tmp/bdevperf.sock 00:20:19.219 10:50:35 -- common/autotest_common.sh@819 -- # '[' -z 3484386 ']' 00:20:19.219 10:50:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.219 10:50:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.219 10:50:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.219 10:50:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.219 10:50:35 -- common/autotest_common.sh@10 -- # set +x 00:20:19.219 [2024-07-10 10:50:35.955166] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:20:19.219 [2024-07-10 10:50:35.955244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484386 ] 00:20:19.219 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.219 [2024-07-10 10:50:36.018238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.476 [2024-07-10 10:50:36.106652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.408 10:50:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:20.408 10:50:36 -- common/autotest_common.sh@852 -- # return 0 00:20:20.408 10:50:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:20.408 [2024-07-10 10:50:37.199192] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.408 [2024-07-10 10:50:37.204572] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.408 [2024-07-10 10:50:37.204609] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.409 [2024-07-10 10:50:37.204650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:20.409 [2024-07-10 10:50:37.205195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6130 (107): Transport endpoint is not connected 00:20:20.409 [2024-07-10 10:50:37.206183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6130 (9): Bad file descriptor 00:20:20.409 [2024-07-10 10:50:37.207180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:20.409 [2024-07-10 10:50:37.207199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:20.409 [2024-07-10 10:50:37.207227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
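The identity strings in the two failed lookups above ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2") appear to be composed of a fixed "NVMe0R" prefix, the hash identifier from the interchange key ("01"), and then the host and subsystem NQNs; the lookups fail simply because the target only registered key1.txt for the host1/cnode1 pair earlier in the run. Purely as an illustration of that composition (printf only, nothing SPDK-specific assumed):

printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
# prints the exact identity the target just reported it could not find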
00:20:20.409 request: 00:20:20.409 { 00:20:20.409 "name": "TLSTEST", 00:20:20.409 "trtype": "tcp", 00:20:20.409 "traddr": "10.0.0.2", 00:20:20.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.409 "adrfam": "ipv4", 00:20:20.409 "trsvcid": "4420", 00:20:20.409 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.409 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:20.409 "method": "bdev_nvme_attach_controller", 00:20:20.409 "req_id": 1 00:20:20.409 } 00:20:20.409 Got JSON-RPC error response 00:20:20.409 response: 00:20:20.409 { 00:20:20.409 "code": -32602, 00:20:20.409 "message": "Invalid parameters" 00:20:20.409 } 00:20:20.409 10:50:37 -- target/tls.sh@36 -- # killprocess 3484386 00:20:20.409 10:50:37 -- common/autotest_common.sh@926 -- # '[' -z 3484386 ']' 00:20:20.409 10:50:37 -- common/autotest_common.sh@930 -- # kill -0 3484386 00:20:20.409 10:50:37 -- common/autotest_common.sh@931 -- # uname 00:20:20.409 10:50:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:20.667 10:50:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3484386 00:20:20.667 10:50:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:20.667 10:50:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:20.667 10:50:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3484386' 00:20:20.667 killing process with pid 3484386 00:20:20.667 10:50:37 -- common/autotest_common.sh@945 -- # kill 3484386 00:20:20.667 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.667 00:20:20.667 Latency(us) 00:20:20.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.667 =================================================================================================================== 00:20:20.667 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.667 10:50:37 -- common/autotest_common.sh@950 -- # wait 3484386 00:20:20.667 10:50:37 -- target/tls.sh@37 -- # return 1 00:20:20.667 10:50:37 -- common/autotest_common.sh@643 -- # es=1 00:20:20.667 10:50:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:20.667 10:50:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:20.667 10:50:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:20.667 10:50:37 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.668 10:50:37 -- common/autotest_common.sh@640 -- # local es=0 00:20:20.668 10:50:37 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.668 10:50:37 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:20.668 10:50:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:20.668 10:50:37 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:20.668 10:50:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:20.668 10:50:37 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.668 10:50:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.668 10:50:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.668 10:50:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.668 10:50:37 -- target/tls.sh@23 -- # psk= 00:20:20.668 10:50:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.668 10:50:37 -- target/tls.sh@28 
-- # bdevperf_pid=3484662 00:20:20.668 10:50:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.668 10:50:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.668 10:50:37 -- target/tls.sh@31 -- # waitforlisten 3484662 /var/tmp/bdevperf.sock 00:20:20.668 10:50:37 -- common/autotest_common.sh@819 -- # '[' -z 3484662 ']' 00:20:20.668 10:50:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.668 10:50:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.668 10:50:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.668 10:50:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.668 10:50:37 -- common/autotest_common.sh@10 -- # set +x 00:20:20.926 [2024-07-10 10:50:37.511525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:20.926 [2024-07-10 10:50:37.511604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484662 ] 00:20:20.926 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.926 [2024-07-10 10:50:37.569929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.926 [2024-07-10 10:50:37.653860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.859 10:50:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.859 10:50:38 -- common/autotest_common.sh@852 -- # return 0 00:20:21.859 10:50:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.117 [2024-07-10 10:50:38.685776] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:22.117 [2024-07-10 10:50:38.688004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1990810 (9): Bad file descriptor 00:20:22.117 [2024-07-10 10:50:38.689000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.117 [2024-07-10 10:50:38.689021] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:22.117 [2024-07-10 10:50:38.689048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:22.117 request: 00:20:22.117 { 00:20:22.117 "name": "TLSTEST", 00:20:22.117 "trtype": "tcp", 00:20:22.117 "traddr": "10.0.0.2", 00:20:22.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.117 "adrfam": "ipv4", 00:20:22.117 "trsvcid": "4420", 00:20:22.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.117 "method": "bdev_nvme_attach_controller", 00:20:22.117 "req_id": 1 00:20:22.117 } 00:20:22.117 Got JSON-RPC error response 00:20:22.117 response: 00:20:22.117 { 00:20:22.117 "code": -32602, 00:20:22.117 "message": "Invalid parameters" 00:20:22.117 } 00:20:22.117 10:50:38 -- target/tls.sh@36 -- # killprocess 3484662 00:20:22.117 10:50:38 -- common/autotest_common.sh@926 -- # '[' -z 3484662 ']' 00:20:22.117 10:50:38 -- common/autotest_common.sh@930 -- # kill -0 3484662 00:20:22.117 10:50:38 -- common/autotest_common.sh@931 -- # uname 00:20:22.117 10:50:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.117 10:50:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3484662 00:20:22.117 10:50:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:22.117 10:50:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:22.117 10:50:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3484662' 00:20:22.117 killing process with pid 3484662 00:20:22.117 10:50:38 -- common/autotest_common.sh@945 -- # kill 3484662 00:20:22.117 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.117 00:20:22.117 Latency(us) 00:20:22.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.117 =================================================================================================================== 00:20:22.118 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.118 10:50:38 -- common/autotest_common.sh@950 -- # wait 3484662 00:20:22.376 10:50:38 -- target/tls.sh@37 -- # return 1 00:20:22.376 10:50:38 -- common/autotest_common.sh@643 -- # es=1 00:20:22.376 10:50:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:22.376 10:50:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:22.376 10:50:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:22.376 10:50:38 -- target/tls.sh@167 -- # killprocess 3480758 00:20:22.376 10:50:38 -- common/autotest_common.sh@926 -- # '[' -z 3480758 ']' 00:20:22.376 10:50:38 -- common/autotest_common.sh@930 -- # kill -0 3480758 00:20:22.376 10:50:38 -- common/autotest_common.sh@931 -- # uname 00:20:22.376 10:50:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.376 10:50:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3480758 00:20:22.376 10:50:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:22.376 10:50:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:22.376 10:50:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3480758' 00:20:22.376 killing process with pid 3480758 00:20:22.376 10:50:38 -- common/autotest_common.sh@945 -- # kill 3480758 00:20:22.376 10:50:38 -- common/autotest_common.sh@950 -- # wait 3480758 00:20:22.634 10:50:39 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:22.634 10:50:39 -- target/tls.sh@49 -- # local key hash crc 00:20:22.634 10:50:39 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:22.634 10:50:39 -- target/tls.sh@51 -- # hash=02 00:20:22.634 10:50:39 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:22.634 10:50:39 -- target/tls.sh@52 -- # gzip -1 -c 00:20:22.634 10:50:39 -- target/tls.sh@52 -- # tail -c8 00:20:22.634 10:50:39 -- target/tls.sh@52 -- # head -c 4 00:20:22.634 10:50:39 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:22.634 10:50:39 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:22.634 10:50:39 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:22.634 10:50:39 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.634 10:50:39 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.634 10:50:39 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.634 10:50:39 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.634 10:50:39 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.634 10:50:39 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:22.634 10:50:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:22.634 10:50:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:22.634 10:50:39 -- common/autotest_common.sh@10 -- # set +x 00:20:22.634 10:50:39 -- nvmf/common.sh@469 -- # nvmfpid=3484825 00:20:22.634 10:50:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.634 10:50:39 -- nvmf/common.sh@470 -- # waitforlisten 3484825 00:20:22.634 10:50:39 -- common/autotest_common.sh@819 -- # '[' -z 3484825 ']' 00:20:22.634 10:50:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.634 10:50:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.634 10:50:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.634 10:50:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.634 10:50:39 -- common/autotest_common.sh@10 -- # set +x 00:20:22.634 [2024-07-10 10:50:39.276678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:22.634 [2024-07-10 10:50:39.276772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.634 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.634 [2024-07-10 10:50:39.343351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.634 [2024-07-10 10:50:39.427769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.634 [2024-07-10 10:50:39.427923] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.634 [2024-07-10 10:50:39.427940] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.634 [2024-07-10 10:50:39.427952] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
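The hash-02 key just written to key_long.txt has the same layout as the hash-01 keys: base64(ASCII hex key || 4 CRC bytes) between the second and the trailing colon. A quick, purely illustrative decode check (assumes coreutils base64 and od):

echo -n MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw== | base64 -d | od -An -tx1
# dumps the 48 hex digits of the key followed by c1 65 cd 27, the same four
# bytes the trace above captured in its crc variable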
00:20:22.634 [2024-07-10 10:50:39.427986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.567 10:50:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.567 10:50:40 -- common/autotest_common.sh@852 -- # return 0 00:20:23.567 10:50:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.567 10:50:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:23.567 10:50:40 -- common/autotest_common.sh@10 -- # set +x 00:20:23.567 10:50:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.567 10:50:40 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.567 10:50:40 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.567 10:50:40 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.826 [2024-07-10 10:50:40.498149] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.826 10:50:40 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:24.083 10:50:40 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:24.340 [2024-07-10 10:50:40.979463] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.340 [2024-07-10 10:50:40.979675] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.340 10:50:40 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:24.599 malloc0 00:20:24.599 10:50:41 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:24.857 10:50:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.115 10:50:41 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.115 10:50:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.115 10:50:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.115 10:50:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.115 10:50:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:25.115 10:50:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.115 10:50:41 -- target/tls.sh@28 -- # bdevperf_pid=3485243 00:20:25.115 10:50:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.115 10:50:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.115 10:50:41 -- target/tls.sh@31 -- # waitforlisten 3485243 /var/tmp/bdevperf.sock 00:20:25.115 10:50:41 -- common/autotest_common.sh@819 -- # '[' -z 3485243 
']' 00:20:25.115 10:50:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.115 10:50:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:25.115 10:50:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.115 10:50:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:25.115 10:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:25.115 [2024-07-10 10:50:41.758985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:25.115 [2024-07-10 10:50:41.759061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485243 ] 00:20:25.115 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.115 [2024-07-10 10:50:41.816550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.115 [2024-07-10 10:50:41.898213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.050 10:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:26.050 10:50:42 -- common/autotest_common.sh@852 -- # return 0 00:20:26.050 10:50:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.306 [2024-07-10 10:50:42.907857] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.306 TLSTESTn1 00:20:26.306 10:50:42 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.306 Running I/O for 10 seconds... 
00:20:38.538 00:20:38.538 Latency(us) 00:20:38.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.538 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.538 Verification LBA range: start 0x0 length 0x2000 00:20:38.538 TLSTESTn1 : 10.02 2816.20 11.00 0.00 0.00 45391.07 4611.79 48351.00 00:20:38.538 =================================================================================================================== 00:20:38.538 Total : 2816.20 11.00 0.00 0.00 45391.07 4611.79 48351.00 00:20:38.538 0 00:20:38.538 10:50:53 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.538 10:50:53 -- target/tls.sh@45 -- # killprocess 3485243 00:20:38.538 10:50:53 -- common/autotest_common.sh@926 -- # '[' -z 3485243 ']' 00:20:38.538 10:50:53 -- common/autotest_common.sh@930 -- # kill -0 3485243 00:20:38.538 10:50:53 -- common/autotest_common.sh@931 -- # uname 00:20:38.538 10:50:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.538 10:50:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3485243 00:20:38.538 10:50:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:38.538 10:50:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:38.538 10:50:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3485243' 00:20:38.538 killing process with pid 3485243 00:20:38.538 10:50:53 -- common/autotest_common.sh@945 -- # kill 3485243 00:20:38.538 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.538 00:20:38.538 Latency(us) 00:20:38.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.538 =================================================================================================================== 00:20:38.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.538 10:50:53 -- common/autotest_common.sh@950 -- # wait 3485243 00:20:38.538 10:50:53 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.538 10:50:53 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.538 10:50:53 -- common/autotest_common.sh@640 -- # local es=0 00:20:38.538 10:50:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.538 10:50:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:38.538 10:50:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:38.538 10:50:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:38.538 10:50:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:38.538 10:50:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.538 10:50:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.538 10:50:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.538 10:50:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.538 10:50:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:38.538 10:50:53 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.538 10:50:53 -- target/tls.sh@28 -- # bdevperf_pid=3486617 00:20:38.538 10:50:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.538 10:50:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.538 10:50:53 -- target/tls.sh@31 -- # waitforlisten 3486617 /var/tmp/bdevperf.sock 00:20:38.538 10:50:53 -- common/autotest_common.sh@819 -- # '[' -z 3486617 ']' 00:20:38.538 10:50:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.538 10:50:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:38.538 10:50:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.538 10:50:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:38.538 10:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:38.538 [2024-07-10 10:50:53.449234] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:38.538 [2024-07-10 10:50:53.449317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3486617 ] 00:20:38.538 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.538 [2024-07-10 10:50:53.506499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.538 [2024-07-10 10:50:53.587533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.539 10:50:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.539 10:50:54 -- common/autotest_common.sh@852 -- # return 0 00:20:38.539 10:50:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.539 [2024-07-10 10:50:54.601198] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.539 [2024-07-10 10:50:54.601254] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:38.539 request: 00:20:38.539 { 00:20:38.539 "name": "TLSTEST", 00:20:38.539 "trtype": "tcp", 00:20:38.539 "traddr": "10.0.0.2", 00:20:38.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.539 "adrfam": "ipv4", 00:20:38.539 "trsvcid": "4420", 00:20:38.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.539 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:38.539 "method": "bdev_nvme_attach_controller", 00:20:38.539 "req_id": 1 00:20:38.539 } 00:20:38.539 Got JSON-RPC error response 00:20:38.539 response: 00:20:38.539 { 00:20:38.539 "code": -22, 00:20:38.539 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:38.539 } 00:20:38.539 10:50:54 -- target/tls.sh@36 -- # killprocess 3486617 00:20:38.539 10:50:54 -- common/autotest_common.sh@926 -- # '[' -z 3486617 ']' 00:20:38.539 10:50:54 -- 
common/autotest_common.sh@930 -- # kill -0 3486617 00:20:38.539 10:50:54 -- common/autotest_common.sh@931 -- # uname 00:20:38.539 10:50:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.539 10:50:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3486617 00:20:38.539 10:50:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:38.539 10:50:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:38.539 10:50:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3486617' 00:20:38.539 killing process with pid 3486617 00:20:38.539 10:50:54 -- common/autotest_common.sh@945 -- # kill 3486617 00:20:38.539 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.539 00:20:38.539 Latency(us) 00:20:38.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.539 =================================================================================================================== 00:20:38.539 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.539 10:50:54 -- common/autotest_common.sh@950 -- # wait 3486617 00:20:38.539 10:50:54 -- target/tls.sh@37 -- # return 1 00:20:38.539 10:50:54 -- common/autotest_common.sh@643 -- # es=1 00:20:38.539 10:50:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:38.539 10:50:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:38.539 10:50:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:38.539 10:50:54 -- target/tls.sh@183 -- # killprocess 3484825 00:20:38.539 10:50:54 -- common/autotest_common.sh@926 -- # '[' -z 3484825 ']' 00:20:38.539 10:50:54 -- common/autotest_common.sh@930 -- # kill -0 3484825 00:20:38.539 10:50:54 -- common/autotest_common.sh@931 -- # uname 00:20:38.539 10:50:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.539 10:50:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3484825 00:20:38.539 10:50:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:38.539 10:50:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:38.539 10:50:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3484825' 00:20:38.539 killing process with pid 3484825 00:20:38.539 10:50:54 -- common/autotest_common.sh@945 -- # kill 3484825 00:20:38.539 10:50:54 -- common/autotest_common.sh@950 -- # wait 3484825 00:20:38.539 10:50:55 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:38.539 10:50:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:38.539 10:50:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:38.539 10:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:38.539 10:50:55 -- nvmf/common.sh@469 -- # nvmfpid=3486793 00:20:38.539 10:50:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.539 10:50:55 -- nvmf/common.sh@470 -- # waitforlisten 3486793 00:20:38.539 10:50:55 -- common/autotest_common.sh@819 -- # '[' -z 3486793 ']' 00:20:38.539 10:50:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.539 10:50:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:38.539 10:50:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
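Both PSK-file failures in this part of the run come down to file mode: key_long.txt was accepted while it was mode 0600, and after the chmod 0666 above both the initiator-side bdev_nvme_attach_controller (JSON-RPC code -22, "Could not retrieve PSK from file") and the target-side nvmf_subsystem_add_host attempted below (code -32603 "Internal error", with "Incorrect permissions for PSK file" in the target log) reject it, until the script restores 0600 further down. In shell terms the contrast is just (path shortened from the trace):

chmod 0600 key_long.txt   # accepted by the PSK loaders in this run
chmod 0666 key_long.txt   # rejected: "Incorrect permissions for PSK file"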
00:20:38.539 10:50:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:38.539 10:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:38.539 [2024-07-10 10:50:55.185523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:38.539 [2024-07-10 10:50:55.185604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.539 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.539 [2024-07-10 10:50:55.251293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.539 [2024-07-10 10:50:55.333709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:38.539 [2024-07-10 10:50:55.333865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.539 [2024-07-10 10:50:55.333883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.539 [2024-07-10 10:50:55.333895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.539 [2024-07-10 10:50:55.333930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.473 10:50:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.473 10:50:56 -- common/autotest_common.sh@852 -- # return 0 00:20:39.473 10:50:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:39.473 10:50:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:39.473 10:50:56 -- common/autotest_common.sh@10 -- # set +x 00:20:39.473 10:50:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.473 10:50:56 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.473 10:50:56 -- common/autotest_common.sh@640 -- # local es=0 00:20:39.473 10:50:56 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.473 10:50:56 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:39.473 10:50:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:39.473 10:50:56 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:39.473 10:50:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:39.473 10:50:56 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.473 10:50:56 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.473 10:50:56 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.731 [2024-07-10 10:50:56.447101] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.731 10:50:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.988 10:50:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.246 [2024-07-10 10:50:56.992610] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.246 [2024-07-10 10:50:56.992856] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.246 10:50:57 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.504 malloc0 00:20:40.504 10:50:57 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.761 10:50:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.019 [2024-07-10 10:50:57.694189] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:41.019 [2024-07-10 10:50:57.694232] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:41.019 [2024-07-10 10:50:57.694257] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:41.019 request: 00:20:41.019 { 00:20:41.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.019 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.019 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:41.019 "method": "nvmf_subsystem_add_host", 00:20:41.019 "req_id": 1 00:20:41.019 } 00:20:41.019 Got JSON-RPC error response 00:20:41.019 response: 00:20:41.019 { 00:20:41.019 "code": -32603, 00:20:41.019 "message": "Internal error" 00:20:41.019 } 00:20:41.019 10:50:57 -- common/autotest_common.sh@643 -- # es=1 00:20:41.019 10:50:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:41.019 10:50:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:41.019 10:50:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:41.019 10:50:57 -- target/tls.sh@189 -- # killprocess 3486793 00:20:41.019 10:50:57 -- common/autotest_common.sh@926 -- # '[' -z 3486793 ']' 00:20:41.019 10:50:57 -- common/autotest_common.sh@930 -- # kill -0 3486793 00:20:41.019 10:50:57 -- common/autotest_common.sh@931 -- # uname 00:20:41.019 10:50:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:41.019 10:50:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3486793 00:20:41.019 10:50:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:41.019 10:50:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:41.019 10:50:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3486793' 00:20:41.019 killing process with pid 3486793 00:20:41.019 10:50:57 -- common/autotest_common.sh@945 -- # kill 3486793 00:20:41.019 10:50:57 -- common/autotest_common.sh@950 -- # wait 3486793 00:20:41.277 10:50:57 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.277 10:50:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:41.277 10:50:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:41.277 10:50:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:41.277 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 10:50:57 -- nvmf/common.sh@469 -- # nvmfpid=3487209 00:20:41.277 10:50:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:20:41.277 10:50:57 -- nvmf/common.sh@470 -- # waitforlisten 3487209 00:20:41.277 10:50:57 -- common/autotest_common.sh@819 -- # '[' -z 3487209 ']' 00:20:41.277 10:50:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.277 10:50:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:41.277 10:50:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.277 10:50:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:41.277 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.277 [2024-07-10 10:50:58.038813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:41.277 [2024-07-10 10:50:58.038907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.277 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.535 [2024-07-10 10:50:58.106409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.535 [2024-07-10 10:50:58.194771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:41.535 [2024-07-10 10:50:58.194938] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.535 [2024-07-10 10:50:58.194958] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.535 [2024-07-10 10:50:58.194972] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
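The first add_host attempt above was rejected because the PSK file was readable by others (tcp.c: 'Incorrect permissions for PSK file', surfaced as a JSON-RPC Internal error), so tls.sh@190 tightens the key to 0600 and brings the target up again from scratch. A condensed sketch of the setup_nvmf_tgt sequence exactly as it appears in the trace, with the long workspace prefix abbreviated to $rootdir:

chmod 0600 "$rootdir/test/nvmf/target/key_long.txt"      # PSK files must not be group/world readable

$rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o
$rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                    # -k enables the (experimental) TLS listener
$rootdir/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
$rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$rootdir/test/nvmf/target/key_long.txt"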
00:20:41.535 [2024-07-10 10:50:58.195012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.469 10:50:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.469 10:50:58 -- common/autotest_common.sh@852 -- # return 0 00:20:42.469 10:50:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:42.469 10:50:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:42.469 10:50:58 -- common/autotest_common.sh@10 -- # set +x 00:20:42.469 10:50:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.469 10:50:59 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.469 10:50:59 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.469 10:50:59 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.469 [2024-07-10 10:50:59.232340] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.469 10:50:59 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.727 10:50:59 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.984 [2024-07-10 10:50:59.717624] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.984 [2024-07-10 10:50:59.717873] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.984 10:50:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.242 malloc0 00:20:43.242 10:50:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:43.500 10:51:00 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:43.758 10:51:00 -- target/tls.sh@197 -- # bdevperf_pid=3487547 00:20:43.758 10:51:00 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.758 10:51:00 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.758 10:51:00 -- target/tls.sh@200 -- # waitforlisten 3487547 /var/tmp/bdevperf.sock 00:20:43.758 10:51:00 -- common/autotest_common.sh@819 -- # '[' -z 3487547 ']' 00:20:43.758 10:51:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.758 10:51:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:43.758 10:51:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.758 10:51:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:43.758 10:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:43.758 [2024-07-10 10:51:00.511903] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:43.758 [2024-07-10 10:51:00.511976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3487547 ] 00:20:43.758 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.758 [2024-07-10 10:51:00.571771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.015 [2024-07-10 10:51:00.655605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.946 10:51:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:44.946 10:51:01 -- common/autotest_common.sh@852 -- # return 0 00:20:44.946 10:51:01 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.946 [2024-07-10 10:51:01.677633] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.946 TLSTESTn1 00:20:44.946 10:51:01 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:45.512 10:51:02 -- target/tls.sh@205 -- # tgtconf='{ 00:20:45.512 "subsystems": [ 00:20:45.512 { 00:20:45.512 "subsystem": "iobuf", 00:20:45.512 "config": [ 00:20:45.512 { 00:20:45.512 "method": "iobuf_set_options", 00:20:45.512 "params": { 00:20:45.512 "small_pool_count": 8192, 00:20:45.512 "large_pool_count": 1024, 00:20:45.512 "small_bufsize": 8192, 00:20:45.512 "large_bufsize": 135168 00:20:45.512 } 00:20:45.512 } 00:20:45.512 ] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "sock", 00:20:45.512 "config": [ 00:20:45.512 { 00:20:45.512 "method": "sock_impl_set_options", 00:20:45.512 "params": { 00:20:45.512 "impl_name": "posix", 00:20:45.512 "recv_buf_size": 2097152, 00:20:45.512 "send_buf_size": 2097152, 00:20:45.512 "enable_recv_pipe": true, 00:20:45.512 "enable_quickack": false, 00:20:45.512 "enable_placement_id": 0, 00:20:45.512 "enable_zerocopy_send_server": true, 00:20:45.512 "enable_zerocopy_send_client": false, 00:20:45.512 "zerocopy_threshold": 0, 00:20:45.512 "tls_version": 0, 00:20:45.512 "enable_ktls": false 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "sock_impl_set_options", 00:20:45.512 "params": { 00:20:45.512 "impl_name": "ssl", 00:20:45.512 "recv_buf_size": 4096, 00:20:45.512 "send_buf_size": 4096, 00:20:45.512 "enable_recv_pipe": true, 00:20:45.512 "enable_quickack": false, 00:20:45.512 "enable_placement_id": 0, 00:20:45.512 "enable_zerocopy_send_server": true, 00:20:45.512 "enable_zerocopy_send_client": false, 00:20:45.512 "zerocopy_threshold": 0, 00:20:45.512 "tls_version": 0, 00:20:45.512 "enable_ktls": false 00:20:45.512 } 00:20:45.512 } 00:20:45.512 ] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "vmd", 00:20:45.512 "config": [] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "accel", 00:20:45.512 "config": [ 00:20:45.512 { 00:20:45.512 "method": "accel_set_options", 00:20:45.512 "params": { 00:20:45.512 "small_cache_size": 128, 
00:20:45.512 "large_cache_size": 16, 00:20:45.512 "task_count": 2048, 00:20:45.512 "sequence_count": 2048, 00:20:45.512 "buf_count": 2048 00:20:45.512 } 00:20:45.512 } 00:20:45.512 ] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "bdev", 00:20:45.512 "config": [ 00:20:45.512 { 00:20:45.512 "method": "bdev_set_options", 00:20:45.512 "params": { 00:20:45.512 "bdev_io_pool_size": 65535, 00:20:45.512 "bdev_io_cache_size": 256, 00:20:45.512 "bdev_auto_examine": true, 00:20:45.512 "iobuf_small_cache_size": 128, 00:20:45.512 "iobuf_large_cache_size": 16 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "bdev_raid_set_options", 00:20:45.512 "params": { 00:20:45.512 "process_window_size_kb": 1024 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "bdev_iscsi_set_options", 00:20:45.512 "params": { 00:20:45.512 "timeout_sec": 30 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "bdev_nvme_set_options", 00:20:45.512 "params": { 00:20:45.512 "action_on_timeout": "none", 00:20:45.512 "timeout_us": 0, 00:20:45.512 "timeout_admin_us": 0, 00:20:45.512 "keep_alive_timeout_ms": 10000, 00:20:45.512 "transport_retry_count": 4, 00:20:45.512 "arbitration_burst": 0, 00:20:45.512 "low_priority_weight": 0, 00:20:45.512 "medium_priority_weight": 0, 00:20:45.512 "high_priority_weight": 0, 00:20:45.512 "nvme_adminq_poll_period_us": 10000, 00:20:45.512 "nvme_ioq_poll_period_us": 0, 00:20:45.512 "io_queue_requests": 0, 00:20:45.512 "delay_cmd_submit": true, 00:20:45.512 "bdev_retry_count": 3, 00:20:45.512 "transport_ack_timeout": 0, 00:20:45.512 "ctrlr_loss_timeout_sec": 0, 00:20:45.512 "reconnect_delay_sec": 0, 00:20:45.512 "fast_io_fail_timeout_sec": 0, 00:20:45.512 "generate_uuids": false, 00:20:45.512 "transport_tos": 0, 00:20:45.512 "io_path_stat": false, 00:20:45.512 "allow_accel_sequence": false 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "bdev_nvme_set_hotplug", 00:20:45.512 "params": { 00:20:45.512 "period_us": 100000, 00:20:45.512 "enable": false 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "bdev_malloc_create", 00:20:45.512 "params": { 00:20:45.512 "name": "malloc0", 00:20:45.512 "num_blocks": 8192, 00:20:45.512 "block_size": 4096, 00:20:45.512 "physical_block_size": 4096, 00:20:45.512 "uuid": "dd35f949-0026-46a9-bd58-119d297f4e29", 00:20:45.512 "optimal_io_boundary": 0 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "bdev_wait_for_examine" 00:20:45.512 } 00:20:45.512 ] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "nbd", 00:20:45.512 "config": [] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "scheduler", 00:20:45.512 "config": [ 00:20:45.512 { 00:20:45.512 "method": "framework_set_scheduler", 00:20:45.512 "params": { 00:20:45.512 "name": "static" 00:20:45.512 } 00:20:45.512 } 00:20:45.512 ] 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "subsystem": "nvmf", 00:20:45.512 "config": [ 00:20:45.512 { 00:20:45.512 "method": "nvmf_set_config", 00:20:45.512 "params": { 00:20:45.512 "discovery_filter": "match_any", 00:20:45.512 "admin_cmd_passthru": { 00:20:45.512 "identify_ctrlr": false 00:20:45.512 } 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "nvmf_set_max_subsystems", 00:20:45.512 "params": { 00:20:45.512 "max_subsystems": 1024 00:20:45.512 } 00:20:45.512 }, 00:20:45.512 { 00:20:45.512 "method": "nvmf_set_crdt", 00:20:45.512 "params": { 00:20:45.512 "crdt1": 0, 00:20:45.512 "crdt2": 0, 00:20:45.512 "crdt3": 0 00:20:45.512 } 
00:20:45.513 }, 00:20:45.513 { 00:20:45.513 "method": "nvmf_create_transport", 00:20:45.513 "params": { 00:20:45.513 "trtype": "TCP", 00:20:45.513 "max_queue_depth": 128, 00:20:45.513 "max_io_qpairs_per_ctrlr": 127, 00:20:45.513 "in_capsule_data_size": 4096, 00:20:45.513 "max_io_size": 131072, 00:20:45.513 "io_unit_size": 131072, 00:20:45.513 "max_aq_depth": 128, 00:20:45.513 "num_shared_buffers": 511, 00:20:45.513 "buf_cache_size": 4294967295, 00:20:45.513 "dif_insert_or_strip": false, 00:20:45.513 "zcopy": false, 00:20:45.513 "c2h_success": false, 00:20:45.513 "sock_priority": 0, 00:20:45.513 "abort_timeout_sec": 1 00:20:45.513 } 00:20:45.513 }, 00:20:45.513 { 00:20:45.513 "method": "nvmf_create_subsystem", 00:20:45.513 "params": { 00:20:45.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.513 "allow_any_host": false, 00:20:45.513 "serial_number": "SPDK00000000000001", 00:20:45.513 "model_number": "SPDK bdev Controller", 00:20:45.513 "max_namespaces": 10, 00:20:45.513 "min_cntlid": 1, 00:20:45.513 "max_cntlid": 65519, 00:20:45.513 "ana_reporting": false 00:20:45.513 } 00:20:45.513 }, 00:20:45.513 { 00:20:45.513 "method": "nvmf_subsystem_add_host", 00:20:45.513 "params": { 00:20:45.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.513 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.513 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:45.513 } 00:20:45.513 }, 00:20:45.513 { 00:20:45.513 "method": "nvmf_subsystem_add_ns", 00:20:45.513 "params": { 00:20:45.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.513 "namespace": { 00:20:45.513 "nsid": 1, 00:20:45.513 "bdev_name": "malloc0", 00:20:45.513 "nguid": "DD35F949002646A9BD58119D297F4E29", 00:20:45.513 "uuid": "dd35f949-0026-46a9-bd58-119d297f4e29" 00:20:45.513 } 00:20:45.513 } 00:20:45.513 }, 00:20:45.513 { 00:20:45.513 "method": "nvmf_subsystem_add_listener", 00:20:45.513 "params": { 00:20:45.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.513 "listen_address": { 00:20:45.513 "trtype": "TCP", 00:20:45.513 "adrfam": "IPv4", 00:20:45.513 "traddr": "10.0.0.2", 00:20:45.513 "trsvcid": "4420" 00:20:45.513 }, 00:20:45.513 "secure_channel": true 00:20:45.513 } 00:20:45.513 } 00:20:45.513 ] 00:20:45.513 } 00:20:45.513 ] 00:20:45.513 }' 00:20:45.513 10:51:02 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.770 10:51:02 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:45.771 "subsystems": [ 00:20:45.771 { 00:20:45.771 "subsystem": "iobuf", 00:20:45.771 "config": [ 00:20:45.771 { 00:20:45.771 "method": "iobuf_set_options", 00:20:45.771 "params": { 00:20:45.771 "small_pool_count": 8192, 00:20:45.771 "large_pool_count": 1024, 00:20:45.771 "small_bufsize": 8192, 00:20:45.771 "large_bufsize": 135168 00:20:45.771 } 00:20:45.771 } 00:20:45.771 ] 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "subsystem": "sock", 00:20:45.771 "config": [ 00:20:45.771 { 00:20:45.771 "method": "sock_impl_set_options", 00:20:45.771 "params": { 00:20:45.771 "impl_name": "posix", 00:20:45.771 "recv_buf_size": 2097152, 00:20:45.771 "send_buf_size": 2097152, 00:20:45.771 "enable_recv_pipe": true, 00:20:45.771 "enable_quickack": false, 00:20:45.771 "enable_placement_id": 0, 00:20:45.771 "enable_zerocopy_send_server": true, 00:20:45.771 "enable_zerocopy_send_client": false, 00:20:45.771 "zerocopy_threshold": 0, 00:20:45.771 "tls_version": 0, 00:20:45.771 "enable_ktls": false 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": 
"sock_impl_set_options", 00:20:45.771 "params": { 00:20:45.771 "impl_name": "ssl", 00:20:45.771 "recv_buf_size": 4096, 00:20:45.771 "send_buf_size": 4096, 00:20:45.771 "enable_recv_pipe": true, 00:20:45.771 "enable_quickack": false, 00:20:45.771 "enable_placement_id": 0, 00:20:45.771 "enable_zerocopy_send_server": true, 00:20:45.771 "enable_zerocopy_send_client": false, 00:20:45.771 "zerocopy_threshold": 0, 00:20:45.771 "tls_version": 0, 00:20:45.771 "enable_ktls": false 00:20:45.771 } 00:20:45.771 } 00:20:45.771 ] 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "subsystem": "vmd", 00:20:45.771 "config": [] 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "subsystem": "accel", 00:20:45.771 "config": [ 00:20:45.771 { 00:20:45.771 "method": "accel_set_options", 00:20:45.771 "params": { 00:20:45.771 "small_cache_size": 128, 00:20:45.771 "large_cache_size": 16, 00:20:45.771 "task_count": 2048, 00:20:45.771 "sequence_count": 2048, 00:20:45.771 "buf_count": 2048 00:20:45.771 } 00:20:45.771 } 00:20:45.771 ] 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "subsystem": "bdev", 00:20:45.771 "config": [ 00:20:45.771 { 00:20:45.771 "method": "bdev_set_options", 00:20:45.771 "params": { 00:20:45.771 "bdev_io_pool_size": 65535, 00:20:45.771 "bdev_io_cache_size": 256, 00:20:45.771 "bdev_auto_examine": true, 00:20:45.771 "iobuf_small_cache_size": 128, 00:20:45.771 "iobuf_large_cache_size": 16 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": "bdev_raid_set_options", 00:20:45.771 "params": { 00:20:45.771 "process_window_size_kb": 1024 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": "bdev_iscsi_set_options", 00:20:45.771 "params": { 00:20:45.771 "timeout_sec": 30 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": "bdev_nvme_set_options", 00:20:45.771 "params": { 00:20:45.771 "action_on_timeout": "none", 00:20:45.771 "timeout_us": 0, 00:20:45.771 "timeout_admin_us": 0, 00:20:45.771 "keep_alive_timeout_ms": 10000, 00:20:45.771 "transport_retry_count": 4, 00:20:45.771 "arbitration_burst": 0, 00:20:45.771 "low_priority_weight": 0, 00:20:45.771 "medium_priority_weight": 0, 00:20:45.771 "high_priority_weight": 0, 00:20:45.771 "nvme_adminq_poll_period_us": 10000, 00:20:45.771 "nvme_ioq_poll_period_us": 0, 00:20:45.771 "io_queue_requests": 512, 00:20:45.771 "delay_cmd_submit": true, 00:20:45.771 "bdev_retry_count": 3, 00:20:45.771 "transport_ack_timeout": 0, 00:20:45.771 "ctrlr_loss_timeout_sec": 0, 00:20:45.771 "reconnect_delay_sec": 0, 00:20:45.771 "fast_io_fail_timeout_sec": 0, 00:20:45.771 "generate_uuids": false, 00:20:45.771 "transport_tos": 0, 00:20:45.771 "io_path_stat": false, 00:20:45.771 "allow_accel_sequence": false 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": "bdev_nvme_attach_controller", 00:20:45.771 "params": { 00:20:45.771 "name": "TLSTEST", 00:20:45.771 "trtype": "TCP", 00:20:45.771 "adrfam": "IPv4", 00:20:45.771 "traddr": "10.0.0.2", 00:20:45.771 "trsvcid": "4420", 00:20:45.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.771 "prchk_reftag": false, 00:20:45.771 "prchk_guard": false, 00:20:45.771 "ctrlr_loss_timeout_sec": 0, 00:20:45.771 "reconnect_delay_sec": 0, 00:20:45.771 "fast_io_fail_timeout_sec": 0, 00:20:45.771 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:45.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.771 "hdgst": false, 00:20:45.771 "ddgst": false 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": "bdev_nvme_set_hotplug", 00:20:45.771 
"params": { 00:20:45.771 "period_us": 100000, 00:20:45.771 "enable": false 00:20:45.771 } 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "method": "bdev_wait_for_examine" 00:20:45.771 } 00:20:45.771 ] 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "subsystem": "nbd", 00:20:45.771 "config": [] 00:20:45.771 } 00:20:45.771 ] 00:20:45.771 }' 00:20:45.771 10:51:02 -- target/tls.sh@208 -- # killprocess 3487547 00:20:45.771 10:51:02 -- common/autotest_common.sh@926 -- # '[' -z 3487547 ']' 00:20:45.771 10:51:02 -- common/autotest_common.sh@930 -- # kill -0 3487547 00:20:45.771 10:51:02 -- common/autotest_common.sh@931 -- # uname 00:20:45.771 10:51:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.771 10:51:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3487547 00:20:45.771 10:51:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:45.771 10:51:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:45.771 10:51:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3487547' 00:20:45.771 killing process with pid 3487547 00:20:45.771 10:51:02 -- common/autotest_common.sh@945 -- # kill 3487547 00:20:45.771 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.771 00:20:45.771 Latency(us) 00:20:45.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.771 =================================================================================================================== 00:20:45.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.771 10:51:02 -- common/autotest_common.sh@950 -- # wait 3487547 00:20:46.029 10:51:02 -- target/tls.sh@209 -- # killprocess 3487209 00:20:46.029 10:51:02 -- common/autotest_common.sh@926 -- # '[' -z 3487209 ']' 00:20:46.029 10:51:02 -- common/autotest_common.sh@930 -- # kill -0 3487209 00:20:46.029 10:51:02 -- common/autotest_common.sh@931 -- # uname 00:20:46.029 10:51:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:46.029 10:51:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3487209 00:20:46.029 10:51:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:46.029 10:51:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:46.029 10:51:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3487209' 00:20:46.029 killing process with pid 3487209 00:20:46.029 10:51:02 -- common/autotest_common.sh@945 -- # kill 3487209 00:20:46.029 10:51:02 -- common/autotest_common.sh@950 -- # wait 3487209 00:20:46.286 10:51:02 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:46.286 10:51:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:46.286 10:51:02 -- target/tls.sh@212 -- # echo '{ 00:20:46.286 "subsystems": [ 00:20:46.286 { 00:20:46.286 "subsystem": "iobuf", 00:20:46.286 "config": [ 00:20:46.286 { 00:20:46.286 "method": "iobuf_set_options", 00:20:46.286 "params": { 00:20:46.286 "small_pool_count": 8192, 00:20:46.286 "large_pool_count": 1024, 00:20:46.286 "small_bufsize": 8192, 00:20:46.286 "large_bufsize": 135168 00:20:46.286 } 00:20:46.286 } 00:20:46.286 ] 00:20:46.286 }, 00:20:46.286 { 00:20:46.286 "subsystem": "sock", 00:20:46.286 "config": [ 00:20:46.286 { 00:20:46.286 "method": "sock_impl_set_options", 00:20:46.286 "params": { 00:20:46.286 "impl_name": "posix", 00:20:46.286 "recv_buf_size": 2097152, 00:20:46.286 "send_buf_size": 2097152, 00:20:46.286 "enable_recv_pipe": true, 00:20:46.286 "enable_quickack": false, 
00:20:46.286 "enable_placement_id": 0, 00:20:46.286 "enable_zerocopy_send_server": true, 00:20:46.286 "enable_zerocopy_send_client": false, 00:20:46.286 "zerocopy_threshold": 0, 00:20:46.286 "tls_version": 0, 00:20:46.286 "enable_ktls": false 00:20:46.286 } 00:20:46.286 }, 00:20:46.286 { 00:20:46.286 "method": "sock_impl_set_options", 00:20:46.286 "params": { 00:20:46.286 "impl_name": "ssl", 00:20:46.286 "recv_buf_size": 4096, 00:20:46.286 "send_buf_size": 4096, 00:20:46.286 "enable_recv_pipe": true, 00:20:46.286 "enable_quickack": false, 00:20:46.286 "enable_placement_id": 0, 00:20:46.286 "enable_zerocopy_send_server": true, 00:20:46.286 "enable_zerocopy_send_client": false, 00:20:46.286 "zerocopy_threshold": 0, 00:20:46.286 "tls_version": 0, 00:20:46.286 "enable_ktls": false 00:20:46.286 } 00:20:46.286 } 00:20:46.286 ] 00:20:46.286 }, 00:20:46.286 { 00:20:46.286 "subsystem": "vmd", 00:20:46.286 "config": [] 00:20:46.286 }, 00:20:46.286 { 00:20:46.286 "subsystem": "accel", 00:20:46.286 "config": [ 00:20:46.286 { 00:20:46.286 "method": "accel_set_options", 00:20:46.286 "params": { 00:20:46.286 "small_cache_size": 128, 00:20:46.286 "large_cache_size": 16, 00:20:46.286 "task_count": 2048, 00:20:46.286 "sequence_count": 2048, 00:20:46.286 "buf_count": 2048 00:20:46.286 } 00:20:46.286 } 00:20:46.286 ] 00:20:46.286 }, 00:20:46.286 { 00:20:46.286 "subsystem": "bdev", 00:20:46.286 "config": [ 00:20:46.286 { 00:20:46.286 "method": "bdev_set_options", 00:20:46.286 "params": { 00:20:46.286 "bdev_io_pool_size": 65535, 00:20:46.287 "bdev_io_cache_size": 256, 00:20:46.287 "bdev_auto_examine": true, 00:20:46.287 "iobuf_small_cache_size": 128, 00:20:46.287 "iobuf_large_cache_size": 16 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "bdev_raid_set_options", 00:20:46.287 "params": { 00:20:46.287 "process_window_size_kb": 1024 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "bdev_iscsi_set_options", 00:20:46.287 "params": { 00:20:46.287 "timeout_sec": 30 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "bdev_nvme_set_options", 00:20:46.287 "params": { 00:20:46.287 "action_on_timeout": "none", 00:20:46.287 "timeout_us": 0, 00:20:46.287 "timeout_admin_us": 0, 00:20:46.287 "keep_alive_timeout_ms": 10000, 00:20:46.287 "transport_retry_count": 4, 00:20:46.287 "arbitration_burst": 0, 00:20:46.287 "low_priority_weight": 0, 00:20:46.287 "medium_priority_weight": 0, 00:20:46.287 "high_priority_weight": 0, 00:20:46.287 "nvme_adminq_poll_period_us": 10000, 00:20:46.287 "nvme_ioq_poll_period_us": 0, 00:20:46.287 "io_queue_requests": 0, 00:20:46.287 "delay_cmd_submit": true, 00:20:46.287 "bdev_retry_count": 3, 00:20:46.287 "transport_ack_timeout": 0, 00:20:46.287 "ctrlr_loss_timeout_sec": 0, 00:20:46.287 "reconnect_delay_sec": 0, 00:20:46.287 "fast_io_fail_timeout_sec": 0, 00:20:46.287 "generate_uuids": false, 00:20:46.287 "transport_tos": 0, 00:20:46.287 "io_path_stat": false, 00:20:46.287 "allow_accel_sequence": false 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "bdev_nvme_set_hotplug", 00:20:46.287 "params": { 00:20:46.287 "period_us": 100000, 00:20:46.287 "enable": false 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "bdev_malloc_create", 00:20:46.287 "params": { 00:20:46.287 "name": "malloc0", 00:20:46.287 "num_blocks": 8192, 00:20:46.287 "block_size": 4096, 00:20:46.287 "physical_block_size": 4096, 00:20:46.287 "uuid": "dd35f949-0026-46a9-bd58-119d297f4e29", 00:20:46.287 "optimal_io_boundary": 0 00:20:46.287 
} 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "bdev_wait_for_examine" 00:20:46.287 } 00:20:46.287 ] 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "subsystem": "nbd", 00:20:46.287 "config": [] 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "subsystem": "scheduler", 00:20:46.287 "config": [ 00:20:46.287 { 00:20:46.287 "method": "framework_set_scheduler", 00:20:46.287 "params": { 00:20:46.287 "name": "static" 00:20:46.287 } 00:20:46.287 } 00:20:46.287 ] 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "subsystem": "nvmf", 00:20:46.287 "config": [ 00:20:46.287 { 00:20:46.287 "method": "nvmf_set_config", 00:20:46.287 "params": { 00:20:46.287 "discovery_filter": "match_any", 00:20:46.287 "admin_cmd_passthru": { 00:20:46.287 "identify_ctrlr": false 00:20:46.287 } 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_set_max_subsystems", 00:20:46.287 "params": { 00:20:46.287 "max_subsystems": 1024 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_set_crdt", 00:20:46.287 "params": { 00:20:46.287 "crdt1": 0, 00:20:46.287 "crdt2": 0, 00:20:46.287 "crdt3": 0 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_create_transport", 00:20:46.287 "params": { 00:20:46.287 "trtype": "TCP", 00:20:46.287 "max_queue_depth": 128, 00:20:46.287 "max_io_qpairs_per_ctrlr": 127, 00:20:46.287 "in_capsule_data_size": 4096, 00:20:46.287 "max_io_size": 131072, 00:20:46.287 "io_unit_size": 131072, 00:20:46.287 "max_aq_depth": 128, 00:20:46.287 "num_shared_buffers": 511, 00:20:46.287 "buf_cache_size": 4294967295, 00:20:46.287 "dif_insert_or_strip": false, 00:20:46.287 "zcopy": false, 00:20:46.287 "c2h_success": false, 00:20:46.287 "sock_priority": 0, 00:20:46.287 "abort_timeout_sec": 1 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_create_subsystem", 00:20:46.287 "params": { 00:20:46.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.287 "allow_any_host": false, 00:20:46.287 "serial_number": "SPDK00000000000001", 00:20:46.287 "model_number": "SPDK bdev Controller", 00:20:46.287 "max_namespaces": 10, 00:20:46.287 "min_cntlid": 1, 00:20:46.287 "max_cntlid": 65519, 00:20:46.287 "ana_reporting": false 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_subsystem_add_host", 00:20:46.287 "params": { 00:20:46.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.287 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.287 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_subsystem_add_ns", 00:20:46.287 "params": { 00:20:46.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.287 "namespace": { 00:20:46.287 "nsid": 1, 00:20:46.287 "bdev_name": "malloc0", 00:20:46.287 "nguid": "DD35F949002646A9BD58119D297F4E29", 00:20:46.287 "uuid": "dd35f949-0026-46a9-bd58-119d297f4e29" 00:20:46.287 } 00:20:46.287 } 00:20:46.287 }, 00:20:46.287 { 00:20:46.287 "method": "nvmf_subsystem_add_listener", 00:20:46.287 "params": { 00:20:46.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.287 "listen_address": { 00:20:46.287 "trtype": "TCP", 00:20:46.287 "adrfam": "IPv4", 00:20:46.287 "traddr": "10.0.0.2", 00:20:46.287 "trsvcid": "4420" 00:20:46.287 }, 00:20:46.287 "secure_channel": true 00:20:46.287 } 00:20:46.287 } 00:20:46.287 ] 00:20:46.287 } 00:20:46.287 ] 00:20:46.287 }' 00:20:46.287 10:51:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:46.287 10:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:46.287 10:51:02 -- 
nvmf/common.sh@469 -- # nvmfpid=3487916 00:20:46.287 10:51:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:46.287 10:51:02 -- nvmf/common.sh@470 -- # waitforlisten 3487916 00:20:46.287 10:51:02 -- common/autotest_common.sh@819 -- # '[' -z 3487916 ']' 00:20:46.287 10:51:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.287 10:51:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:46.287 10:51:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.287 10:51:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:46.287 10:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:46.287 [2024-07-10 10:51:02.938915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:46.287 [2024-07-10 10:51:02.938989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.287 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.287 [2024-07-10 10:51:03.006252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.287 [2024-07-10 10:51:03.094078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:46.287 [2024-07-10 10:51:03.094224] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.287 [2024-07-10 10:51:03.094241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.287 [2024-07-10 10:51:03.094253] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
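This restart does not reconfigure the target over RPC; instead the JSON snapshot captured earlier with save_config is replayed at startup. The '-c /dev/fd/62' on the command line is the file descriptor of a process substitution that echoes the saved config, so the pattern can be sketched as follows (netns wrapper and workspace prefix dropped, config body elided):

tgtconf=$($rootdir/scripts/rpc.py save_config)           # snapshot of the live target (tls.sh@205)
# ... the old target is killed, then a fresh one is started with the snapshot applied at boot:
nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")     # the <(...) fd is what shows up as /dev/fd/62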
00:20:46.287 [2024-07-10 10:51:03.094278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.544 [2024-07-10 10:51:03.321296] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.544 [2024-07-10 10:51:03.353328] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.544 [2024-07-10 10:51:03.353546] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.108 10:51:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:47.108 10:51:03 -- common/autotest_common.sh@852 -- # return 0 00:20:47.108 10:51:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:47.108 10:51:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:47.108 10:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.109 10:51:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.109 10:51:03 -- target/tls.sh@216 -- # bdevperf_pid=3488072 00:20:47.109 10:51:03 -- target/tls.sh@217 -- # waitforlisten 3488072 /var/tmp/bdevperf.sock 00:20:47.109 10:51:03 -- common/autotest_common.sh@819 -- # '[' -z 3488072 ']' 00:20:47.109 10:51:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.109 10:51:03 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:47.109 10:51:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:47.109 10:51:03 -- target/tls.sh@213 -- # echo '{ 00:20:47.109 "subsystems": [ 00:20:47.109 { 00:20:47.109 "subsystem": "iobuf", 00:20:47.109 "config": [ 00:20:47.109 { 00:20:47.109 "method": "iobuf_set_options", 00:20:47.109 "params": { 00:20:47.109 "small_pool_count": 8192, 00:20:47.109 "large_pool_count": 1024, 00:20:47.109 "small_bufsize": 8192, 00:20:47.109 "large_bufsize": 135168 00:20:47.109 } 00:20:47.109 } 00:20:47.109 ] 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "subsystem": "sock", 00:20:47.109 "config": [ 00:20:47.109 { 00:20:47.109 "method": "sock_impl_set_options", 00:20:47.109 "params": { 00:20:47.109 "impl_name": "posix", 00:20:47.109 "recv_buf_size": 2097152, 00:20:47.109 "send_buf_size": 2097152, 00:20:47.109 "enable_recv_pipe": true, 00:20:47.109 "enable_quickack": false, 00:20:47.109 "enable_placement_id": 0, 00:20:47.109 "enable_zerocopy_send_server": true, 00:20:47.109 "enable_zerocopy_send_client": false, 00:20:47.109 "zerocopy_threshold": 0, 00:20:47.109 "tls_version": 0, 00:20:47.109 "enable_ktls": false 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "sock_impl_set_options", 00:20:47.109 "params": { 00:20:47.109 "impl_name": "ssl", 00:20:47.109 "recv_buf_size": 4096, 00:20:47.109 "send_buf_size": 4096, 00:20:47.109 "enable_recv_pipe": true, 00:20:47.109 "enable_quickack": false, 00:20:47.109 "enable_placement_id": 0, 00:20:47.109 "enable_zerocopy_send_server": true, 00:20:47.109 "enable_zerocopy_send_client": false, 00:20:47.109 "zerocopy_threshold": 0, 00:20:47.109 "tls_version": 0, 00:20:47.109 "enable_ktls": false 00:20:47.109 } 00:20:47.109 } 00:20:47.109 ] 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "subsystem": "vmd", 00:20:47.109 "config": [] 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "subsystem": "accel", 00:20:47.109 "config": [ 00:20:47.109 { 00:20:47.109 "method": "accel_set_options", 00:20:47.109 "params": { 00:20:47.109 "small_cache_size": 128, 00:20:47.109 
"large_cache_size": 16, 00:20:47.109 "task_count": 2048, 00:20:47.109 "sequence_count": 2048, 00:20:47.109 "buf_count": 2048 00:20:47.109 } 00:20:47.109 } 00:20:47.109 ] 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "subsystem": "bdev", 00:20:47.109 "config": [ 00:20:47.109 { 00:20:47.109 "method": "bdev_set_options", 00:20:47.109 "params": { 00:20:47.109 "bdev_io_pool_size": 65535, 00:20:47.109 "bdev_io_cache_size": 256, 00:20:47.109 "bdev_auto_examine": true, 00:20:47.109 "iobuf_small_cache_size": 128, 00:20:47.109 "iobuf_large_cache_size": 16 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "bdev_raid_set_options", 00:20:47.109 "params": { 00:20:47.109 "process_window_size_kb": 1024 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "bdev_iscsi_set_options", 00:20:47.109 "params": { 00:20:47.109 "timeout_sec": 30 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "bdev_nvme_set_options", 00:20:47.109 "params": { 00:20:47.109 "action_on_timeout": "none", 00:20:47.109 "timeout_us": 0, 00:20:47.109 "timeout_admin_us": 0, 00:20:47.109 "keep_alive_timeout_ms": 10000, 00:20:47.109 "transport_retry_count": 4, 00:20:47.109 "arbitration_burst": 0, 00:20:47.109 "low_priority_weight": 0, 00:20:47.109 "medium_priority_weight": 0, 00:20:47.109 "high_priority_weight": 0, 00:20:47.109 "nvme_adminq_poll_period_us": 10000, 00:20:47.109 "nvme_ioq_poll_period_us": 0, 00:20:47.109 "io_queue_requests": 512, 00:20:47.109 "delay_cmd_submit": true, 00:20:47.109 "bdev_retry_count": 3, 00:20:47.109 "transport_ack_timeout": 0, 00:20:47.109 "ctrlr_loss_timeout_sec": 0, 00:20:47.109 "reconnect_delay_sec": 0, 00:20:47.109 "fast_io_fail_timeout_sec": 0, 00:20:47.109 "generate_uuids": false, 00:20:47.109 "transport_tos": 0, 00:20:47.109 "io_path_stat": false, 00:20:47.109 "allow_accel_sequence": false 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "bdev_nvme_attach_controller", 00:20:47.109 "params": { 00:20:47.109 "name": "TLSTEST", 00:20:47.109 "trtype": "TCP", 00:20:47.109 "adrfam": "IPv4", 00:20:47.109 "traddr": "10.0.0.2", 00:20:47.109 "trsvcid": "4420", 00:20:47.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.109 "prchk_reftag": false, 00:20:47.109 "prchk_guard": false, 00:20:47.109 "ctrlr_loss_timeout_sec": 0, 00:20:47.109 "reconnect_delay_sec": 0, 00:20:47.109 "fast_io_fail_timeout_sec": 0, 00:20:47.109 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:47.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.109 "hdgst": 10:51:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.109 false, 00:20:47.109 "ddgst": false 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "bdev_nvme_set_hotplug", 00:20:47.109 "params": { 00:20:47.109 "period_us": 100000, 00:20:47.109 "enable": false 00:20:47.109 } 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "method": "bdev_wait_for_examine" 00:20:47.109 } 00:20:47.109 ] 00:20:47.109 }, 00:20:47.109 { 00:20:47.109 "subsystem": "nbd", 00:20:47.109 "config": [] 00:20:47.109 } 00:20:47.109 ] 00:20:47.109 }' 00:20:47.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:47.109 10:51:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:47.109 10:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.367 [2024-07-10 10:51:03.950025] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:47.367 [2024-07-10 10:51:03.950101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488072 ] 00:20:47.367 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.367 [2024-07-10 10:51:04.007365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.367 [2024-07-10 10:51:04.091362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.625 [2024-07-10 10:51:04.250908] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.190 10:51:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:48.190 10:51:04 -- common/autotest_common.sh@852 -- # return 0 00:20:48.190 10:51:04 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.190 Running I/O for 10 seconds... 00:21:00.387 00:21:00.387 Latency(us) 00:21:00.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.387 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.387 Verification LBA range: start 0x0 length 0x2000 00:21:00.387 TLSTESTn1 : 10.02 2775.87 10.84 0.00 0.00 46052.74 9077.95 50875.35 00:21:00.387 =================================================================================================================== 00:21:00.387 Total : 2775.87 10.84 0.00 0.00 46052.74 9077.95 50875.35 00:21:00.387 0 00:21:00.387 10:51:15 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.387 10:51:15 -- target/tls.sh@223 -- # killprocess 3488072 00:21:00.387 10:51:15 -- common/autotest_common.sh@926 -- # '[' -z 3488072 ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@930 -- # kill -0 3488072 00:21:00.387 10:51:15 -- common/autotest_common.sh@931 -- # uname 00:21:00.387 10:51:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3488072 00:21:00.387 10:51:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:00.387 10:51:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3488072' 00:21:00.387 killing process with pid 3488072 00:21:00.387 10:51:15 -- common/autotest_common.sh@945 -- # kill 3488072 00:21:00.387 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.387 00:21:00.387 Latency(us) 00:21:00.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.387 =================================================================================================================== 00:21:00.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.387 10:51:15 -- common/autotest_common.sh@950 -- # wait 3488072 00:21:00.387 10:51:15 -- target/tls.sh@224 -- # killprocess 3487916 00:21:00.387 10:51:15 -- common/autotest_common.sh@926 -- # '[' -z 3487916 ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@930 -- # kill -0 3487916 00:21:00.387 10:51:15 -- 
common/autotest_common.sh@931 -- # uname 00:21:00.387 10:51:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3487916 00:21:00.387 10:51:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:00.387 10:51:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3487916' 00:21:00.387 killing process with pid 3487916 00:21:00.387 10:51:15 -- common/autotest_common.sh@945 -- # kill 3487916 00:21:00.387 10:51:15 -- common/autotest_common.sh@950 -- # wait 3487916 00:21:00.387 10:51:15 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:00.387 10:51:15 -- target/tls.sh@227 -- # cleanup 00:21:00.387 10:51:15 -- target/tls.sh@15 -- # process_shm --id 0 00:21:00.387 10:51:15 -- common/autotest_common.sh@796 -- # type=--id 00:21:00.387 10:51:15 -- common/autotest_common.sh@797 -- # id=0 00:21:00.387 10:51:15 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:00.387 10:51:15 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:00.387 10:51:15 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:00.387 10:51:15 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:00.387 10:51:15 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:00.387 nvmf_trace.0 00:21:00.387 10:51:15 -- common/autotest_common.sh@811 -- # return 0 00:21:00.387 10:51:15 -- target/tls.sh@16 -- # killprocess 3488072 00:21:00.387 10:51:15 -- common/autotest_common.sh@926 -- # '[' -z 3488072 ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@930 -- # kill -0 3488072 00:21:00.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3488072) - No such process 00:21:00.387 10:51:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3488072 is not found' 00:21:00.387 Process with pid 3488072 is not found 00:21:00.387 10:51:15 -- target/tls.sh@17 -- # nvmftestfini 00:21:00.387 10:51:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:00.387 10:51:15 -- nvmf/common.sh@116 -- # sync 00:21:00.387 10:51:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:00.387 10:51:15 -- nvmf/common.sh@119 -- # set +e 00:21:00.387 10:51:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:00.387 10:51:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:00.387 rmmod nvme_tcp 00:21:00.387 rmmod nvme_fabrics 00:21:00.387 rmmod nvme_keyring 00:21:00.387 10:51:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:00.387 10:51:15 -- nvmf/common.sh@123 -- # set -e 00:21:00.387 10:51:15 -- nvmf/common.sh@124 -- # return 0 00:21:00.387 10:51:15 -- nvmf/common.sh@477 -- # '[' -n 3487916 ']' 00:21:00.387 10:51:15 -- nvmf/common.sh@478 -- # killprocess 3487916 00:21:00.387 10:51:15 -- common/autotest_common.sh@926 -- # '[' -z 3487916 ']' 00:21:00.387 10:51:15 -- common/autotest_common.sh@930 -- # kill -0 3487916 00:21:00.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3487916) - No such process 00:21:00.387 10:51:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3487916 is not found' 00:21:00.387 Process with pid 3487916 is not found 00:21:00.387 
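Teardown archives the shared-memory tracepoint file for offline analysis and unloads the kernel NVMe-over-fabrics modules; the bare 'rmmod nvme_tcp / nvme_fabrics / nvme_keyring' lines above are the verbose output of modprobe -r, and the two 'No such process' kills are expected because both daemons were already stopped. The remaining steps (namespace flush and PSK removal) follow just below. The essential commands as they appear in the trace, paths shortened:

tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # keep the trace data
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
rm -f "$rootdir"/test/nvmf/target/key1.txt \
      "$rootdir"/test/nvmf/target/key2.txt \
      "$rootdir"/test/nvmf/target/key_long.txt                              # discard the generated PSKs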
10:51:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:00.387 10:51:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:00.387 10:51:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:00.387 10:51:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.387 10:51:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:00.387 10:51:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.387 10:51:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.387 10:51:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.953 10:51:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:00.953 10:51:17 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:00.953 00:21:00.953 real 1m13.952s 00:21:00.953 user 1m50.381s 00:21:00.953 sys 0m27.044s 00:21:00.953 10:51:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.953 10:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.953 ************************************ 00:21:00.953 END TEST nvmf_tls 00:21:00.953 ************************************ 00:21:00.953 10:51:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:00.953 10:51:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:00.953 10:51:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:00.953 10:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.953 ************************************ 00:21:00.953 START TEST nvmf_fips 00:21:00.953 ************************************ 00:21:00.953 10:51:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:01.212 * Looking for test storage... 
00:21:01.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:01.212 10:51:17 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.212 10:51:17 -- nvmf/common.sh@7 -- # uname -s 00:21:01.212 10:51:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.212 10:51:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.212 10:51:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.212 10:51:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.212 10:51:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.212 10:51:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.212 10:51:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.212 10:51:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.212 10:51:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.212 10:51:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.212 10:51:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.212 10:51:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.212 10:51:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.212 10:51:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.212 10:51:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.212 10:51:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.212 10:51:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.212 10:51:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.212 10:51:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.212 10:51:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.212 10:51:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.212 10:51:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.212 10:51:17 -- paths/export.sh@5 -- # export PATH 00:21:01.212 10:51:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.212 10:51:17 -- nvmf/common.sh@46 -- # : 0 00:21:01.212 10:51:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:01.212 10:51:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:01.212 10:51:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:01.212 10:51:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.212 10:51:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.212 10:51:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:01.212 10:51:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:01.212 10:51:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:01.212 10:51:17 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:01.212 10:51:17 -- fips/fips.sh@89 -- # check_openssl_version 00:21:01.212 10:51:17 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:01.212 10:51:17 -- fips/fips.sh@85 -- # openssl version 00:21:01.212 10:51:17 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:01.212 10:51:17 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:01.212 10:51:17 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:01.212 10:51:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:01.212 10:51:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:01.212 10:51:17 -- scripts/common.sh@335 -- # IFS=.-: 00:21:01.212 10:51:17 -- scripts/common.sh@335 -- # read -ra ver1 00:21:01.212 10:51:17 -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.212 10:51:17 -- scripts/common.sh@336 -- # read -ra ver2 00:21:01.212 10:51:17 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:01.212 10:51:17 -- scripts/common.sh@339 -- # ver1_l=3 00:21:01.212 10:51:17 -- scripts/common.sh@340 -- # ver2_l=3 00:21:01.212 10:51:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:01.212 10:51:17 -- scripts/common.sh@343 -- # case "$op" in 00:21:01.212 10:51:17 -- scripts/common.sh@347 -- # : 1 00:21:01.212 10:51:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:01.212 10:51:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.212 10:51:17 -- scripts/common.sh@364 -- # decimal 3 00:21:01.212 10:51:17 -- scripts/common.sh@352 -- # local d=3 00:21:01.212 10:51:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:01.212 10:51:17 -- scripts/common.sh@354 -- # echo 3 00:21:01.212 10:51:17 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:01.212 10:51:17 -- scripts/common.sh@365 -- # decimal 3 00:21:01.212 10:51:17 -- scripts/common.sh@352 -- # local d=3 00:21:01.212 10:51:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:01.212 10:51:17 -- scripts/common.sh@354 -- # echo 3 00:21:01.212 10:51:17 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:01.212 10:51:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.212 10:51:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:01.212 10:51:17 -- scripts/common.sh@363 -- # (( v++ )) 00:21:01.212 10:51:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.212 10:51:17 -- scripts/common.sh@364 -- # decimal 0 00:21:01.212 10:51:17 -- scripts/common.sh@352 -- # local d=0 00:21:01.212 10:51:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.212 10:51:17 -- scripts/common.sh@354 -- # echo 0 00:21:01.212 10:51:17 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:01.212 10:51:17 -- scripts/common.sh@365 -- # decimal 0 00:21:01.212 10:51:17 -- scripts/common.sh@352 -- # local d=0 00:21:01.212 10:51:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.212 10:51:17 -- scripts/common.sh@354 -- # echo 0 00:21:01.212 10:51:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:01.212 10:51:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.212 10:51:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:01.212 10:51:17 -- scripts/common.sh@363 -- # (( v++ )) 00:21:01.212 10:51:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.212 10:51:17 -- scripts/common.sh@364 -- # decimal 9 00:21:01.212 10:51:17 -- scripts/common.sh@352 -- # local d=9 00:21:01.212 10:51:17 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:01.212 10:51:17 -- scripts/common.sh@354 -- # echo 9 00:21:01.212 10:51:17 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:01.212 10:51:17 -- scripts/common.sh@365 -- # decimal 0 00:21:01.212 10:51:17 -- scripts/common.sh@352 -- # local d=0 00:21:01.212 10:51:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.212 10:51:17 -- scripts/common.sh@354 -- # echo 0 00:21:01.212 10:51:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:01.212 10:51:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.212 10:51:17 -- scripts/common.sh@366 -- # return 0 00:21:01.212 10:51:17 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:01.212 10:51:17 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:01.212 10:51:17 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:01.212 10:51:17 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:01.212 10:51:17 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:01.212 10:51:17 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:01.212 10:51:17 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:01.212 10:51:17 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:01.212 10:51:17 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:01.212 10:51:17 -- fips/fips.sh@114 -- # build_openssl_config 00:21:01.212 10:51:17 -- fips/fips.sh@37 -- # cat 00:21:01.212 10:51:17 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:01.212 10:51:17 -- fips/fips.sh@58 -- # cat - 00:21:01.212 10:51:17 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:01.212 10:51:17 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:01.212 10:51:17 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:01.212 10:51:17 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:01.212 10:51:17 -- fips/fips.sh@117 -- # openssl list -providers 00:21:01.212 10:51:17 -- fips/fips.sh@117 -- # grep name 00:21:01.212 10:51:17 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:01.212 10:51:17 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:01.212 10:51:17 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:01.212 10:51:17 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:01.212 10:51:17 -- fips/fips.sh@128 -- # : 00:21:01.212 10:51:17 -- common/autotest_common.sh@640 -- # local es=0 00:21:01.212 10:51:17 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:01.212 10:51:17 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:01.213 10:51:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.213 10:51:17 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:01.213 10:51:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.213 10:51:17 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:01.213 10:51:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.213 10:51:17 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:01.213 10:51:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:01.213 10:51:17 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:01.213 Error setting digest 00:21:01.213 0022926C597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:01.213 0022926C597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:01.213 10:51:17 -- common/autotest_common.sh@643 -- # es=1 00:21:01.213 10:51:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:01.213 10:51:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:01.213 10:51:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
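The trace above is test/nvmf/fips/fips.sh gating the run on a FIPS-capable OpenSSL: the version must be at least 3.0.0, a FIPS provider must be listed alongside the base provider, and a non-approved digest such as MD5 must be rejected. A minimal sketch of those three checks is shown below, assuming an OpenSSL 3.x host; sort -V stands in here for the script's own cmp_versions helper, so this is an illustration rather than the canonical implementation.

# Illustrative only: the preconditions fips.sh verifies before the TLS test
ver=$(openssl version | awk '{print $2}')                   # e.g. 3.0.9 in this run
printf '3.0.0\n%s\n' "$ver" | sort -V -C ||
    { echo "OpenSSL >= 3.0.0 required" >&2; exit 1; }
openssl list -providers | grep -qi fips ||
    { echo "no FIPS provider loaded" >&2; exit 1; }
if echo test | openssl md5 2>/dev/null; then                # must fail when FIPS is enforced
    echo "MD5 unexpectedly succeeded - FIPS mode not enforced" >&2; exit 1
fi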
00:21:01.213 10:51:17 -- fips/fips.sh@131 -- # nvmftestinit 00:21:01.213 10:51:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:01.213 10:51:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.213 10:51:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:01.213 10:51:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:01.213 10:51:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:01.213 10:51:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.213 10:51:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.213 10:51:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.213 10:51:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:01.213 10:51:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:01.213 10:51:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:01.213 10:51:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.113 10:51:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:03.113 10:51:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:03.113 10:51:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:03.113 10:51:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:03.113 10:51:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:03.113 10:51:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:03.113 10:51:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:03.113 10:51:19 -- nvmf/common.sh@294 -- # net_devs=() 00:21:03.113 10:51:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:03.113 10:51:19 -- nvmf/common.sh@295 -- # e810=() 00:21:03.113 10:51:19 -- nvmf/common.sh@295 -- # local -ga e810 00:21:03.113 10:51:19 -- nvmf/common.sh@296 -- # x722=() 00:21:03.113 10:51:19 -- nvmf/common.sh@296 -- # local -ga x722 00:21:03.113 10:51:19 -- nvmf/common.sh@297 -- # mlx=() 00:21:03.113 10:51:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:03.113 10:51:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.113 10:51:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:03.113 10:51:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:03.113 10:51:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:03.113 10:51:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.113 10:51:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.113 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:21:03.113 10:51:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.113 10:51:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.113 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:03.113 10:51:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:03.113 10:51:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.113 10:51:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.113 10:51:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.113 10:51:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.113 10:51:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.113 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.113 10:51:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.113 10:51:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.113 10:51:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.113 10:51:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.113 10:51:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.113 10:51:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.113 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.113 10:51:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.113 10:51:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:03.113 10:51:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:03.113 10:51:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:03.113 10:51:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:03.113 10:51:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.113 10:51:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.113 10:51:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.113 10:51:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:03.113 10:51:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.113 10:51:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.113 10:51:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:03.113 10:51:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.113 10:51:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.113 10:51:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:03.113 10:51:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:03.113 10:51:19 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:03.113 10:51:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.113 10:51:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.113 10:51:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.113 10:51:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:03.113 10:51:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.371 10:51:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.371 10:51:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.371 10:51:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:03.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:21:03.371 00:21:03.371 --- 10.0.0.2 ping statistics --- 00:21:03.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.372 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:03.372 10:51:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:21:03.372 00:21:03.372 --- 10.0.0.1 ping statistics --- 00:21:03.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.372 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:03.372 10:51:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.372 10:51:19 -- nvmf/common.sh@410 -- # return 0 00:21:03.372 10:51:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:03.372 10:51:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.372 10:51:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:03.372 10:51:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:03.372 10:51:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.372 10:51:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:03.372 10:51:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:03.372 10:51:19 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:03.372 10:51:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:03.372 10:51:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:03.372 10:51:19 -- common/autotest_common.sh@10 -- # set +x 00:21:03.372 10:51:19 -- nvmf/common.sh@469 -- # nvmfpid=3491973 00:21:03.372 10:51:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.372 10:51:19 -- nvmf/common.sh@470 -- # waitforlisten 3491973 00:21:03.372 10:51:19 -- common/autotest_common.sh@819 -- # '[' -z 3491973 ']' 00:21:03.372 10:51:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.372 10:51:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.372 10:51:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.372 10:51:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.372 10:51:19 -- common/autotest_common.sh@10 -- # set +x 00:21:03.372 [2024-07-10 10:51:20.074688] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:21:03.372 [2024-07-10 10:51:20.074811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.372 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.372 [2024-07-10 10:51:20.142524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.630 [2024-07-10 10:51:20.231435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.630 [2024-07-10 10:51:20.231612] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.630 [2024-07-10 10:51:20.231629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.630 [2024-07-10 10:51:20.231641] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.630 [2024-07-10 10:51:20.231677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.196 10:51:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.196 10:51:21 -- common/autotest_common.sh@852 -- # return 0 00:21:04.196 10:51:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:04.196 10:51:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:04.196 10:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 10:51:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.453 10:51:21 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:04.453 10:51:21 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:04.453 10:51:21 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.453 10:51:21 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:04.453 10:51:21 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.453 10:51:21 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.453 10:51:21 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.454 10:51:21 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.454 [2024-07-10 10:51:21.272584] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.738 [2024-07-10 10:51:21.288573] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.738 [2024-07-10 10:51:21.288791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.738 malloc0 00:21:04.738 10:51:21 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.738 10:51:21 -- fips/fips.sh@148 -- # bdevperf_pid=3492190 00:21:04.738 10:51:21 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.738 10:51:21 -- fips/fips.sh@149 -- # waitforlisten 3492190 /var/tmp/bdevperf.sock 00:21:04.738 10:51:21 -- common/autotest_common.sh@819 -- # '[' -z 3492190 ']' 00:21:04.738 10:51:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.738 10:51:21 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.738 10:51:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.738 10:51:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.738 10:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:04.738 [2024-07-10 10:51:21.405054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:04.738 [2024-07-10 10:51:21.405138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492190 ] 00:21:04.738 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.738 [2024-07-10 10:51:21.464473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.738 [2024-07-10 10:51:21.554652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.669 10:51:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:05.669 10:51:22 -- common/autotest_common.sh@852 -- # return 0 00:21:05.669 10:51:22 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.926 [2024-07-10 10:51:22.588239] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.926 TLSTESTn1 00:21:05.926 10:51:22 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.183 Running I/O for 10 seconds... 
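Everything the bdevperf run above depends on is visible in the trace: the target listens on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk namespace while the initiator stays on the host side at 10.0.0.1, and both ends share the TLS pre-shared key written to key.txt. A condensed sketch of the attach sequence follows; the RPC socket paths, NQNs and key value are copied from this run and would differ on another setup.

# Sketch of the TLS attach exercised by fips.sh (values taken from this run)
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt && chmod 0600 key.txt              # same key was handed to the target via rpc.py earlier
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt                                           # PSK turns on TLS for the NVMe/TCP connection
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
# bdevperf then runs the 10 s verify workload whose results follow below.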
00:21:16.139 00:21:16.139 Latency(us) 00:21:16.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.139 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.139 Verification LBA range: start 0x0 length 0x2000 00:21:16.139 TLSTESTn1 : 10.03 2844.38 11.11 0.00 0.00 44928.19 7475.96 59807.67 00:21:16.139 =================================================================================================================== 00:21:16.139 Total : 2844.38 11.11 0.00 0.00 44928.19 7475.96 59807.67 00:21:16.139 0 00:21:16.139 10:51:32 -- fips/fips.sh@1 -- # cleanup 00:21:16.139 10:51:32 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:16.139 10:51:32 -- common/autotest_common.sh@796 -- # type=--id 00:21:16.139 10:51:32 -- common/autotest_common.sh@797 -- # id=0 00:21:16.139 10:51:32 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:16.139 10:51:32 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:16.139 10:51:32 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:16.139 10:51:32 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:16.139 10:51:32 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:16.139 10:51:32 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:16.139 nvmf_trace.0 00:21:16.139 10:51:32 -- common/autotest_common.sh@811 -- # return 0 00:21:16.139 10:51:32 -- fips/fips.sh@16 -- # killprocess 3492190 00:21:16.139 10:51:32 -- common/autotest_common.sh@926 -- # '[' -z 3492190 ']' 00:21:16.139 10:51:32 -- common/autotest_common.sh@930 -- # kill -0 3492190 00:21:16.139 10:51:32 -- common/autotest_common.sh@931 -- # uname 00:21:16.139 10:51:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:16.139 10:51:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3492190 00:21:16.139 10:51:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:16.139 10:51:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:16.139 10:51:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3492190' 00:21:16.139 killing process with pid 3492190 00:21:16.139 10:51:32 -- common/autotest_common.sh@945 -- # kill 3492190 00:21:16.139 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.139 00:21:16.139 Latency(us) 00:21:16.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.139 =================================================================================================================== 00:21:16.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.139 10:51:32 -- common/autotest_common.sh@950 -- # wait 3492190 00:21:16.398 10:51:33 -- fips/fips.sh@17 -- # nvmftestfini 00:21:16.398 10:51:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:16.398 10:51:33 -- nvmf/common.sh@116 -- # sync 00:21:16.398 10:51:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:16.398 10:51:33 -- nvmf/common.sh@119 -- # set +e 00:21:16.398 10:51:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:16.398 10:51:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:16.398 rmmod nvme_tcp 00:21:16.398 rmmod nvme_fabrics 00:21:16.398 rmmod nvme_keyring 00:21:16.398 10:51:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:16.655 10:51:33 -- nvmf/common.sh@123 -- # set -e 00:21:16.655 10:51:33 -- nvmf/common.sh@124 -- # return 0 
00:21:16.655 10:51:33 -- nvmf/common.sh@477 -- # '[' -n 3491973 ']' 00:21:16.655 10:51:33 -- nvmf/common.sh@478 -- # killprocess 3491973 00:21:16.655 10:51:33 -- common/autotest_common.sh@926 -- # '[' -z 3491973 ']' 00:21:16.655 10:51:33 -- common/autotest_common.sh@930 -- # kill -0 3491973 00:21:16.655 10:51:33 -- common/autotest_common.sh@931 -- # uname 00:21:16.655 10:51:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:16.655 10:51:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3491973 00:21:16.655 10:51:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:16.655 10:51:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:16.655 10:51:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3491973' 00:21:16.655 killing process with pid 3491973 00:21:16.655 10:51:33 -- common/autotest_common.sh@945 -- # kill 3491973 00:21:16.655 10:51:33 -- common/autotest_common.sh@950 -- # wait 3491973 00:21:16.913 10:51:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:16.913 10:51:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:16.913 10:51:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:16.913 10:51:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.913 10:51:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:16.913 10:51:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.913 10:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.913 10:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.810 10:51:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:18.810 10:51:35 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.810 00:21:18.810 real 0m17.797s 00:21:18.810 user 0m20.448s 00:21:18.810 sys 0m7.141s 00:21:18.810 10:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.810 10:51:35 -- common/autotest_common.sh@10 -- # set +x 00:21:18.810 ************************************ 00:21:18.810 END TEST nvmf_fips 00:21:18.810 ************************************ 00:21:18.810 10:51:35 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:18.810 10:51:35 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:18.810 10:51:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:18.810 10:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:18.810 10:51:35 -- common/autotest_common.sh@10 -- # set +x 00:21:18.810 ************************************ 00:21:18.810 START TEST nvmf_fuzz 00:21:18.810 ************************************ 00:21:18.810 10:51:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:18.810 * Looking for test storage... 
00:21:18.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.810 10:51:35 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.810 10:51:35 -- nvmf/common.sh@7 -- # uname -s 00:21:18.810 10:51:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.810 10:51:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.810 10:51:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.810 10:51:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.810 10:51:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.068 10:51:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.068 10:51:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.068 10:51:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.068 10:51:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.068 10:51:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.068 10:51:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.068 10:51:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.068 10:51:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.068 10:51:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.068 10:51:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.068 10:51:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.068 10:51:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.068 10:51:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.068 10:51:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.068 10:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.068 10:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.068 10:51:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.068 10:51:35 -- paths/export.sh@5 -- # export PATH 00:21:19.068 10:51:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.068 10:51:35 -- nvmf/common.sh@46 -- # : 0 00:21:19.068 10:51:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:19.068 10:51:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:19.068 10:51:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:19.068 10:51:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.068 10:51:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.068 10:51:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:19.068 10:51:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:19.068 10:51:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:19.068 10:51:35 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:19.068 10:51:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:19.068 10:51:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.068 10:51:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:19.068 10:51:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:19.068 10:51:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:19.068 10:51:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.068 10:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.068 10:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.068 10:51:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:19.068 10:51:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:19.068 10:51:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:19.068 10:51:35 -- common/autotest_common.sh@10 -- # set +x 00:21:20.967 10:51:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:20.967 10:51:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:20.967 10:51:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:20.967 10:51:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:20.967 10:51:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:20.967 10:51:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:20.967 10:51:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:20.967 10:51:37 -- nvmf/common.sh@294 -- # net_devs=() 00:21:20.967 10:51:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:20.967 10:51:37 -- nvmf/common.sh@295 -- # e810=() 00:21:20.967 10:51:37 -- nvmf/common.sh@295 -- # local -ga e810 00:21:20.967 10:51:37 -- nvmf/common.sh@296 -- # x722=() 
00:21:20.967 10:51:37 -- nvmf/common.sh@296 -- # local -ga x722 00:21:20.967 10:51:37 -- nvmf/common.sh@297 -- # mlx=() 00:21:20.967 10:51:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:20.967 10:51:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.967 10:51:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:20.967 10:51:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:20.967 10:51:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:20.967 10:51:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.967 10:51:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:20.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:20.967 10:51:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.967 10:51:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:20.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:20.967 10:51:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:20.967 10:51:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.967 10:51:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.967 10:51:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.967 10:51:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.967 10:51:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:20.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:20.967 10:51:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:20.967 10:51:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.967 10:51:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.967 10:51:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.967 10:51:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.967 10:51:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:20.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:20.967 10:51:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.967 10:51:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:20.967 10:51:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:20.967 10:51:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:20.967 10:51:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:20.967 10:51:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.967 10:51:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.967 10:51:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.967 10:51:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:20.967 10:51:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.967 10:51:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.967 10:51:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:20.967 10:51:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.967 10:51:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.967 10:51:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:20.967 10:51:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:20.967 10:51:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.967 10:51:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.225 10:51:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.225 10:51:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.225 10:51:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:21.225 10:51:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.225 10:51:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.225 10:51:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.225 10:51:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:21.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:21:21.225 00:21:21.225 --- 10.0.0.2 ping statistics --- 00:21:21.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.225 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:21.225 10:51:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:21:21.225 00:21:21.225 --- 10.0.0.1 ping statistics --- 00:21:21.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.225 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:21.225 10:51:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.225 10:51:37 -- nvmf/common.sh@410 -- # return 0 00:21:21.225 10:51:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:21.225 10:51:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.225 10:51:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:21.225 10:51:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:21.225 10:51:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.225 10:51:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:21.225 10:51:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:21.225 10:51:37 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3495497 00:21:21.225 10:51:37 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:21.225 10:51:37 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:21.225 10:51:37 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3495497 00:21:21.225 10:51:37 -- common/autotest_common.sh@819 -- # '[' -z 3495497 ']' 00:21:21.225 10:51:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.225 10:51:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:21.225 10:51:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:21.225 10:51:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:21.225 10:51:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.159 10:51:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:22.159 10:51:38 -- common/autotest_common.sh@852 -- # return 0 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.159 10:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.159 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.159 10:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:22.159 10:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.159 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.159 Malloc0 00:21:22.159 10:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.159 10:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.159 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.159 10:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.159 10:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.159 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.159 10:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.159 10:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:22.159 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:21:22.159 10:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:22.159 10:51:38 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:54.331 Fuzzing completed. Shutting down the fuzz application 00:21:54.331 00:21:54.331 Dumping successful admin opcodes: 00:21:54.331 8, 9, 10, 24, 00:21:54.331 Dumping successful io opcodes: 00:21:54.331 0, 9, 00:21:54.331 NS: 0x200003aeff00 I/O qp, Total commands completed: 446790, total successful commands: 2592, random_seed: 3915716544 00:21:54.331 NS: 0x200003aeff00 admin qp, Total commands completed: 55968, total successful commands: 444, random_seed: 1882755136 00:21:54.331 10:52:09 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:54.331 Fuzzing completed. 
Shutting down the fuzz application 00:21:54.331 00:21:54.331 Dumping successful admin opcodes: 00:21:54.331 24, 00:21:54.331 Dumping successful io opcodes: 00:21:54.331 00:21:54.331 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3759464638 00:21:54.331 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3759584056 00:21:54.331 10:52:10 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.331 10:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.331 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:21:54.331 10:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:54.331 10:52:10 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:54.331 10:52:10 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:54.331 10:52:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.331 10:52:10 -- nvmf/common.sh@116 -- # sync 00:21:54.331 10:52:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.331 10:52:10 -- nvmf/common.sh@119 -- # set +e 00:21:54.331 10:52:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.331 10:52:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.331 rmmod nvme_tcp 00:21:54.331 rmmod nvme_fabrics 00:21:54.331 rmmod nvme_keyring 00:21:54.331 10:52:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:54.331 10:52:10 -- nvmf/common.sh@123 -- # set -e 00:21:54.331 10:52:10 -- nvmf/common.sh@124 -- # return 0 00:21:54.331 10:52:10 -- nvmf/common.sh@477 -- # '[' -n 3495497 ']' 00:21:54.331 10:52:10 -- nvmf/common.sh@478 -- # killprocess 3495497 00:21:54.331 10:52:10 -- common/autotest_common.sh@926 -- # '[' -z 3495497 ']' 00:21:54.331 10:52:10 -- common/autotest_common.sh@930 -- # kill -0 3495497 00:21:54.331 10:52:10 -- common/autotest_common.sh@931 -- # uname 00:21:54.331 10:52:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.331 10:52:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3495497 00:21:54.331 10:52:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:54.331 10:52:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:54.331 10:52:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3495497' 00:21:54.331 killing process with pid 3495497 00:21:54.331 10:52:10 -- common/autotest_common.sh@945 -- # kill 3495497 00:21:54.331 10:52:10 -- common/autotest_common.sh@950 -- # wait 3495497 00:21:54.331 10:52:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:54.331 10:52:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:54.331 10:52:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:54.331 10:52:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.331 10:52:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:54.331 10:52:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.331 10:52:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.331 10:52:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.863 10:52:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:56.863 10:52:13 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:56.863 00:21:56.863 real 0m37.632s 00:21:56.863 user 0m51.237s 00:21:56.863 sys 
0m15.784s 00:21:56.863 10:52:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.863 10:52:13 -- common/autotest_common.sh@10 -- # set +x 00:21:56.863 ************************************ 00:21:56.863 END TEST nvmf_fuzz 00:21:56.863 ************************************ 00:21:56.863 10:52:13 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:56.863 10:52:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:56.863 10:52:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.863 10:52:13 -- common/autotest_common.sh@10 -- # set +x 00:21:56.863 ************************************ 00:21:56.863 START TEST nvmf_multiconnection 00:21:56.863 ************************************ 00:21:56.863 10:52:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:56.863 * Looking for test storage... 00:21:56.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.863 10:52:13 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.863 10:52:13 -- nvmf/common.sh@7 -- # uname -s 00:21:56.863 10:52:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.863 10:52:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.863 10:52:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.863 10:52:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.863 10:52:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.863 10:52:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.863 10:52:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.863 10:52:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.863 10:52:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.863 10:52:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.863 10:52:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.863 10:52:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.863 10:52:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.863 10:52:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.863 10:52:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.863 10:52:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.863 10:52:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.864 10:52:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.864 10:52:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.864 10:52:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 10:52:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 10:52:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 10:52:13 -- paths/export.sh@5 -- # export PATH 00:21:56.864 10:52:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.864 10:52:13 -- nvmf/common.sh@46 -- # : 0 00:21:56.864 10:52:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.864 10:52:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.864 10:52:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.864 10:52:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.864 10:52:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.864 10:52:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.864 10:52:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.864 10:52:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.864 10:52:13 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.864 10:52:13 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.864 10:52:13 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:56.864 10:52:13 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:56.864 10:52:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.864 10:52:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.864 10:52:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.864 10:52:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.864 10:52:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.864 10:52:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.864 10:52:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.864 10:52:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.864 10:52:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:56.864 10:52:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:56.864 10:52:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:56.864 10:52:13 -- common/autotest_common.sh@10 -- 
# set +x 00:21:58.763 10:52:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:58.763 10:52:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:58.763 10:52:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:58.763 10:52:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:58.763 10:52:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:58.763 10:52:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:58.763 10:52:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:58.763 10:52:15 -- nvmf/common.sh@294 -- # net_devs=() 00:21:58.763 10:52:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:58.763 10:52:15 -- nvmf/common.sh@295 -- # e810=() 00:21:58.763 10:52:15 -- nvmf/common.sh@295 -- # local -ga e810 00:21:58.763 10:52:15 -- nvmf/common.sh@296 -- # x722=() 00:21:58.763 10:52:15 -- nvmf/common.sh@296 -- # local -ga x722 00:21:58.763 10:52:15 -- nvmf/common.sh@297 -- # mlx=() 00:21:58.763 10:52:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:58.763 10:52:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.763 10:52:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:58.763 10:52:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:58.763 10:52:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:58.763 10:52:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:58.763 10:52:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.763 10:52:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:58.763 10:52:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.763 10:52:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.763 10:52:15 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:58.763 10:52:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:58.763 10:52:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.763 10:52:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:58.763 10:52:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.763 10:52:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.763 10:52:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.763 10:52:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:58.763 10:52:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.763 10:52:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:58.763 10:52:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.763 10:52:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.763 10:52:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.763 10:52:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:58.763 10:52:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:58.763 10:52:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:58.763 10:52:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:58.763 10:52:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.764 10:52:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.764 10:52:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.764 10:52:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:58.764 10:52:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.764 10:52:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.764 10:52:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:58.764 10:52:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.764 10:52:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.764 10:52:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:58.764 10:52:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:58.764 10:52:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.764 10:52:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.764 10:52:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.764 10:52:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.764 10:52:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:58.764 10:52:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.764 10:52:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.764 10:52:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.764 10:52:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:58.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
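The ip/iptables commands above wire the two detected ICE ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with TCP port 4420 opened for the NVMe-oF listener. A condensed sketch of that wiring, assuming the interface names reported for this run:

    TGT_NS=cvl_0_0_ns_spdk
    ip netns add $TGT_NS
    ip link set cvl_0_0 netns $TGT_NS             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
    ip netns exec $TGT_NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $TGT_NS ip link set cvl_0_0 up
    ip netns exec $TGT_NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener
    ping -c 1 10.0.0.2                            # sanity-check reachability in both directions
    ip netns exec $TGT_NS ping -c 1 10.0.0.1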
00:21:58.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:21:58.764 00:21:58.764 --- 10.0.0.2 ping statistics --- 00:21:58.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.764 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:58.764 10:52:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:21:58.764 00:21:58.764 --- 10.0.0.1 ping statistics --- 00:21:58.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.764 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:58.764 10:52:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.764 10:52:15 -- nvmf/common.sh@410 -- # return 0 00:21:58.764 10:52:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:58.764 10:52:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.764 10:52:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:58.764 10:52:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:58.764 10:52:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.764 10:52:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:58.764 10:52:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:58.764 10:52:15 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:58.764 10:52:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:58.764 10:52:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:58.764 10:52:15 -- common/autotest_common.sh@10 -- # set +x 00:21:58.764 10:52:15 -- nvmf/common.sh@469 -- # nvmfpid=3501368 00:21:58.764 10:52:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.764 10:52:15 -- nvmf/common.sh@470 -- # waitforlisten 3501368 00:21:58.764 10:52:15 -- common/autotest_common.sh@819 -- # '[' -z 3501368 ']' 00:21:58.764 10:52:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.764 10:52:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.764 10:52:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.764 10:52:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.764 10:52:15 -- common/autotest_common.sh@10 -- # set +x 00:21:58.764 [2024-07-10 10:52:15.426549] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:58.764 [2024-07-10 10:52:15.426645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.764 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.764 [2024-07-10 10:52:15.492099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.764 [2024-07-10 10:52:15.583442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:58.764 [2024-07-10 10:52:15.583599] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
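With the fabric reachable, nvmf_tgt is started inside the target namespace on a four-core mask (-m 0xF) and the harness blocks until the application answers on its RPC socket at /var/tmp/spdk.sock. A minimal equivalent of that launch, assuming the stock SPDK tree layout; polling rpc_get_methods is one way to wait for readiness, not necessarily how the in-tree waitforlisten helper does it:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket; rpc_get_methods only succeeds once the target is listening
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done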
00:21:58.764 [2024-07-10 10:52:15.583617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.764 [2024-07-10 10:52:15.583630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.764 [2024-07-10 10:52:15.583684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.764 [2024-07-10 10:52:15.583711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.764 [2024-07-10 10:52:15.583836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.764 [2024-07-10 10:52:15.583844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.698 10:52:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.698 10:52:16 -- common/autotest_common.sh@852 -- # return 0 00:21:59.698 10:52:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:59.698 10:52:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 10:52:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.698 10:52:16 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.698 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 [2024-07-10 10:52:16.432134] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.698 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.698 10:52:16 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:59.698 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.698 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.698 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 Malloc1 00:21:59.698 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.698 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:59.698 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.698 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:59.698 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.698 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.698 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 [2024-07-10 10:52:16.489586] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.698 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.698 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.698 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:59.698 10:52:16 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.698 Malloc2 00:21:59.698 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.698 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:59.698 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.698 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.956 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.956 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:59.956 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.956 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.956 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.956 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:59.956 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.956 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.956 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.956 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.956 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:59.956 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.956 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.956 Malloc3 00:21:59.956 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.956 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.957 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 Malloc4 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.957 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 Malloc5 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.957 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 Malloc6 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.957 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 Malloc7 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.957 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:59.957 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.957 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.215 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.215 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.215 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:00.215 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.215 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.215 Malloc8 00:22:00.215 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.216 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 Malloc9 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
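The rpc_cmd traces repeat one provisioning pattern per subsystem, eleven times in total (NVMF_SUBSYS=11): create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem with any-host access and an SPDKn serial, attach the bdev as a namespace, and add the shared TCP listener. Collapsed into the loop it effectively is, shown here with the stock rpc.py client (the test itself drives the same RPCs through its rpc_cmd wrapper):

    for i in $(seq 1 11); do
        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the test header
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        # every subsystem shares the one listener on the namespaced port
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done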
00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.216 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 Malloc10 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.216 10:52:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 Malloc11 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:00.216 10:52:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.216 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 10:52:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.216 10:52:16 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:00.216 10:52:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.216 10:52:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:01.150 10:52:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:01.150 10:52:17 -- common/autotest_common.sh@1177 -- # local i=0 00:22:01.150 10:52:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.150 10:52:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:01.150 10:52:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:03.048 10:52:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:03.048 10:52:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:03.048 10:52:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:03.048 10:52:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:03.048 10:52:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.048 10:52:19 -- common/autotest_common.sh@1187 -- # return 0 00:22:03.048 10:52:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.048 10:52:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:03.613 10:52:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:03.613 10:52:20 -- common/autotest_common.sh@1177 -- # local i=0 00:22:03.613 10:52:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:03.613 10:52:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:03.613 10:52:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:05.508 10:52:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:05.508 10:52:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:05.508 10:52:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:05.508 10:52:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:05.508 10:52:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:05.508 10:52:22 -- common/autotest_common.sh@1187 -- # return 0 00:22:05.508 10:52:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:05.508 10:52:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:06.440 10:52:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:06.440 10:52:22 -- common/autotest_common.sh@1177 -- # local i=0 00:22:06.440 10:52:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:06.440 10:52:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:06.440 10:52:22 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:08.336 10:52:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:08.336 10:52:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:08.336 10:52:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:08.336 10:52:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:08.336 10:52:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.336 10:52:24 -- common/autotest_common.sh@1187 -- # return 0 00:22:08.336 10:52:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.336 10:52:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:08.899 10:52:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:08.899 10:52:25 -- common/autotest_common.sh@1177 -- # local i=0 00:22:08.899 10:52:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.899 10:52:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:08.899 10:52:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:10.849 10:52:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:10.849 10:52:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:10.849 10:52:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:10.849 10:52:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:10.849 10:52:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.849 10:52:27 -- common/autotest_common.sh@1187 -- # return 0 00:22:10.849 10:52:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.849 10:52:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:11.782 10:52:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:11.782 10:52:28 -- common/autotest_common.sh@1177 -- # local i=0 00:22:11.782 10:52:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.782 10:52:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:11.782 10:52:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:13.679 10:52:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:13.679 10:52:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:13.679 10:52:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:13.679 10:52:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:13.679 10:52:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.679 10:52:30 -- common/autotest_common.sh@1187 -- # return 0 00:22:13.679 10:52:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.679 10:52:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:14.612 10:52:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:14.612 10:52:31 -- common/autotest_common.sh@1177 -- # local i=0 00:22:14.612 10:52:31 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:14.612 10:52:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:14.612 10:52:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:16.508 10:52:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:16.508 10:52:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:16.508 10:52:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:16.508 10:52:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:16.508 10:52:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:16.508 10:52:33 -- common/autotest_common.sh@1187 -- # return 0 00:22:16.508 10:52:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:16.508 10:52:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:17.441 10:52:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:17.441 10:52:34 -- common/autotest_common.sh@1177 -- # local i=0 00:22:17.441 10:52:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:17.441 10:52:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:17.441 10:52:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:19.334 10:52:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:19.334 10:52:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:19.334 10:52:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:19.334 10:52:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:19.334 10:52:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:19.334 10:52:36 -- common/autotest_common.sh@1187 -- # return 0 00:22:19.334 10:52:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.334 10:52:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:20.265 10:52:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:20.265 10:52:36 -- common/autotest_common.sh@1177 -- # local i=0 00:22:20.265 10:52:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:20.265 10:52:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:20.265 10:52:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:22.162 10:52:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:22.162 10:52:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:22.162 10:52:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:22.162 10:52:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:22.162 10:52:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.162 10:52:38 -- common/autotest_common.sh@1187 -- # return 0 00:22:22.162 10:52:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.162 10:52:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:23.095 10:52:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:23.095 
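On the initiator side the pattern is just as regular: nvme connect to 10.0.0.2:4420 with the host NQN/ID shown above, then poll lsblk until a block device carrying the expected SPDKn serial appears (the helper gives up after 15 attempts two seconds apart). A sketch of that handshake for one subsystem; the function name is ours, the flags are standard nvme-cli:

    connect_one() {
        local idx=$1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 \
            -n nqn.2016-06.io.spdk:cnode$idx \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
            --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
        local i=0
        while (( i++ <= 15 )); do
            sleep 2
            # done once lsblk reports a namespace whose serial matches SPDK$idx
            (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK$idx) >= 1 )) && return 0
        done
        return 1
    }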
10:52:39 -- common/autotest_common.sh@1177 -- # local i=0 00:22:23.095 10:52:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:23.095 10:52:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:23.095 10:52:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:24.990 10:52:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:24.990 10:52:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:24.990 10:52:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:24.990 10:52:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:24.990 10:52:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.990 10:52:41 -- common/autotest_common.sh@1187 -- # return 0 00:22:24.990 10:52:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.990 10:52:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:25.923 10:52:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:25.923 10:52:42 -- common/autotest_common.sh@1177 -- # local i=0 00:22:25.923 10:52:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.923 10:52:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:25.923 10:52:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:27.820 10:52:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:27.820 10:52:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:27.820 10:52:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:27.820 10:52:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:27.820 10:52:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:27.820 10:52:44 -- common/autotest_common.sh@1187 -- # return 0 00:22:27.820 10:52:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.820 10:52:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:28.752 10:52:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:28.752 10:52:45 -- common/autotest_common.sh@1177 -- # local i=0 00:22:28.752 10:52:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.752 10:52:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:28.752 10:52:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:30.651 10:52:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:30.651 10:52:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:30.651 10:52:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:30.651 10:52:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:30.651 10:52:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.651 10:52:47 -- common/autotest_common.sh@1187 -- # return 0 00:22:30.651 10:52:47 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:30.651 [global] 00:22:30.651 thread=1 00:22:30.651 invalidate=1 00:22:30.651 rw=read 00:22:30.651 time_based=1 00:22:30.651 
runtime=10 00:22:30.651 ioengine=libaio 00:22:30.651 direct=1 00:22:30.651 bs=262144 00:22:30.651 iodepth=64 00:22:30.651 norandommap=1 00:22:30.651 numjobs=1 00:22:30.651 00:22:30.651 [job0] 00:22:30.651 filename=/dev/nvme0n1 00:22:30.651 [job1] 00:22:30.651 filename=/dev/nvme10n1 00:22:30.651 [job2] 00:22:30.651 filename=/dev/nvme1n1 00:22:30.651 [job3] 00:22:30.651 filename=/dev/nvme2n1 00:22:30.651 [job4] 00:22:30.651 filename=/dev/nvme3n1 00:22:30.651 [job5] 00:22:30.651 filename=/dev/nvme4n1 00:22:30.651 [job6] 00:22:30.651 filename=/dev/nvme5n1 00:22:30.651 [job7] 00:22:30.651 filename=/dev/nvme6n1 00:22:30.651 [job8] 00:22:30.651 filename=/dev/nvme7n1 00:22:30.651 [job9] 00:22:30.651 filename=/dev/nvme8n1 00:22:30.651 [job10] 00:22:30.651 filename=/dev/nvme9n1 00:22:30.651 Could not set queue depth (nvme0n1) 00:22:30.651 Could not set queue depth (nvme10n1) 00:22:30.651 Could not set queue depth (nvme1n1) 00:22:30.651 Could not set queue depth (nvme2n1) 00:22:30.651 Could not set queue depth (nvme3n1) 00:22:30.651 Could not set queue depth (nvme4n1) 00:22:30.651 Could not set queue depth (nvme5n1) 00:22:30.651 Could not set queue depth (nvme6n1) 00:22:30.651 Could not set queue depth (nvme7n1) 00:22:30.651 Could not set queue depth (nvme8n1) 00:22:30.651 Could not set queue depth (nvme9n1) 00:22:30.909 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.909 fio-3.35 00:22:30.909 Starting 11 threads 00:22:43.194 00:22:43.194 job0: (groupid=0, jobs=1): err= 0: pid=3505876: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=425, BW=106MiB/s (112MB/s)(1074MiB/10089msec) 00:22:43.194 slat (usec): min=10, max=131453, avg=2053.16, stdev=7610.82 00:22:43.194 clat (msec): min=6, max=584, avg=148.09, stdev=68.49 00:22:43.194 lat (msec): min=11, max=584, avg=150.15, stdev=69.86 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 69], 20.00th=[ 106], 00:22:43.194 | 30.00th=[ 123], 40.00th=[ 133], 50.00th=[ 144], 60.00th=[ 155], 00:22:43.194 | 70.00th=[ 171], 80.00th=[ 190], 90.00th=[ 213], 95.00th=[ 236], 00:22:43.194 | 99.00th=[ 435], 99.50th=[ 464], 99.90th=[ 481], 99.95th=[ 481], 00:22:43.194 | 99.99th=[ 584] 00:22:43.194 bw ( KiB/s): min=31744, max=191616, per=6.12%, 
avg=108345.60, stdev=39275.52, samples=20 00:22:43.194 iops : min= 124, max= 748, avg=423.20, stdev=153.36, samples=20 00:22:43.194 lat (msec) : 10=0.02%, 20=1.16%, 50=6.70%, 100=9.40%, 250=78.00% 00:22:43.194 lat (msec) : 500=4.68%, 750=0.02% 00:22:43.194 cpu : usr=0.29%, sys=1.43%, ctx=970, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=4296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job1: (groupid=0, jobs=1): err= 0: pid=3505877: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=760, BW=190MiB/s (199MB/s)(1918MiB/10090msec) 00:22:43.194 slat (usec): min=8, max=89041, avg=724.03, stdev=3708.50 00:22:43.194 clat (usec): min=1293, max=288841, avg=83384.75, stdev=57829.88 00:22:43.194 lat (usec): min=1345, max=288902, avg=84108.78, stdev=58439.65 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 32], 00:22:43.194 | 30.00th=[ 42], 40.00th=[ 55], 50.00th=[ 75], 60.00th=[ 91], 00:22:43.194 | 70.00th=[ 107], 80.00th=[ 130], 90.00th=[ 163], 95.00th=[ 207], 00:22:43.194 | 99.00th=[ 241], 99.50th=[ 259], 99.90th=[ 284], 99.95th=[ 288], 00:22:43.194 | 99.99th=[ 288] 00:22:43.194 bw ( KiB/s): min=67072, max=388608, per=10.99%, avg=194745.75, stdev=84736.41, samples=20 00:22:43.194 iops : min= 262, max= 1518, avg=760.70, stdev=331.00, samples=20 00:22:43.194 lat (msec) : 2=0.08%, 4=1.00%, 10=3.25%, 20=6.51%, 50=25.16% 00:22:43.194 lat (msec) : 100=30.54%, 250=32.66%, 500=0.81% 00:22:43.194 cpu : usr=0.33%, sys=2.16%, ctx=1857, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=7671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job2: (groupid=0, jobs=1): err= 0: pid=3505878: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=562, BW=141MiB/s (148MB/s)(1410MiB/10015msec) 00:22:43.194 slat (usec): min=9, max=140134, avg=1292.80, stdev=5439.56 00:22:43.194 clat (usec): min=1231, max=482926, avg=112317.66, stdev=77429.41 00:22:43.194 lat (usec): min=1260, max=543100, avg=113610.46, stdev=78196.08 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 43], 00:22:43.194 | 30.00th=[ 54], 40.00th=[ 72], 50.00th=[ 100], 60.00th=[ 125], 00:22:43.194 | 70.00th=[ 153], 80.00th=[ 176], 90.00th=[ 203], 95.00th=[ 230], 00:22:43.194 | 99.00th=[ 418], 99.50th=[ 460], 99.90th=[ 477], 99.95th=[ 481], 00:22:43.194 | 99.99th=[ 485] 00:22:43.194 bw ( KiB/s): min=32256, max=376832, per=8.05%, avg=142699.30, stdev=90045.23, samples=20 00:22:43.194 iops : min= 126, max= 1472, avg=557.40, stdev=351.72, samples=20 00:22:43.194 lat (msec) : 2=0.05%, 4=0.05%, 10=0.85%, 20=2.27%, 50=23.45% 00:22:43.194 lat (msec) : 100=23.71%, 250=46.47%, 500=3.14% 00:22:43.194 cpu : usr=0.39%, sys=1.74%, ctx=1337, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=5638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job3: (groupid=0, jobs=1): err= 0: pid=3505879: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=484, BW=121MiB/s (127MB/s)(1224MiB/10106msec) 00:22:43.194 slat (usec): min=11, max=169988, avg=1885.71, stdev=6799.54 00:22:43.194 clat (msec): min=5, max=484, avg=130.13, stdev=70.66 00:22:43.194 lat (msec): min=5, max=581, avg=132.01, stdev=71.92 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 55], 20.00th=[ 75], 00:22:43.194 | 30.00th=[ 88], 40.00th=[ 107], 50.00th=[ 120], 60.00th=[ 136], 00:22:43.194 | 70.00th=[ 159], 80.00th=[ 180], 90.00th=[ 207], 95.00th=[ 234], 00:22:43.194 | 99.00th=[ 439], 99.50th=[ 468], 99.90th=[ 481], 99.95th=[ 485], 00:22:43.194 | 99.99th=[ 485] 00:22:43.194 bw ( KiB/s): min=34304, max=230400, per=6.98%, avg=123699.20, stdev=57184.60, samples=20 00:22:43.194 iops : min= 134, max= 900, avg=483.20, stdev=223.38, samples=20 00:22:43.194 lat (msec) : 10=0.55%, 20=1.02%, 50=7.13%, 100=27.49%, 250=60.07% 00:22:43.194 lat (msec) : 500=3.74% 00:22:43.194 cpu : usr=0.25%, sys=1.78%, ctx=1063, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job4: (groupid=0, jobs=1): err= 0: pid=3505880: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=485, BW=121MiB/s (127MB/s)(1226MiB/10109msec) 00:22:43.194 slat (usec): min=9, max=99867, avg=1007.19, stdev=5151.56 00:22:43.194 clat (usec): min=856, max=491253, avg=130781.40, stdev=72878.02 00:22:43.194 lat (usec): min=881, max=491280, avg=131788.59, stdev=73557.15 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 79], 00:22:43.194 | 30.00th=[ 102], 40.00th=[ 118], 50.00th=[ 131], 60.00th=[ 144], 00:22:43.194 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 209], 95.00th=[ 239], 00:22:43.194 | 99.00th=[ 393], 99.50th=[ 439], 99.90th=[ 464], 99.95th=[ 481], 00:22:43.194 | 99.99th=[ 493] 00:22:43.194 bw ( KiB/s): min=63488, max=201216, per=6.99%, avg=123911.75, stdev=37962.26, samples=20 00:22:43.194 iops : min= 248, max= 786, avg=484.00, stdev=148.24, samples=20 00:22:43.194 lat (usec) : 1000=0.02% 00:22:43.194 lat (msec) : 2=0.20%, 4=0.75%, 10=3.26%, 20=5.99%, 50=2.87% 00:22:43.194 lat (msec) : 100=16.70%, 250=66.10%, 500=4.10% 00:22:43.194 cpu : usr=0.22%, sys=1.39%, ctx=1453, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=4905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job5: (groupid=0, jobs=1): err= 0: pid=3505881: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=796, BW=199MiB/s (209MB/s)(2010MiB/10088msec) 00:22:43.194 slat (usec): min=9, max=100171, avg=805.71, 
stdev=3984.58 00:22:43.194 clat (usec): min=919, max=330566, avg=79450.12, stdev=60921.72 00:22:43.194 lat (usec): min=941, max=353693, avg=80255.83, stdev=61571.12 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 31], 00:22:43.194 | 30.00th=[ 41], 40.00th=[ 52], 50.00th=[ 62], 60.00th=[ 79], 00:22:43.194 | 70.00th=[ 95], 80.00th=[ 122], 90.00th=[ 180], 95.00th=[ 215], 00:22:43.194 | 99.00th=[ 247], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:22:43.194 | 99.99th=[ 330] 00:22:43.194 bw ( KiB/s): min=78848, max=435712, per=11.52%, avg=204145.85, stdev=102354.16, samples=20 00:22:43.194 iops : min= 308, max= 1702, avg=797.40, stdev=399.85, samples=20 00:22:43.194 lat (usec) : 1000=0.02% 00:22:43.194 lat (msec) : 2=0.16%, 4=0.45%, 10=3.68%, 20=9.02%, 50=25.32% 00:22:43.194 lat (msec) : 100=33.52%, 250=26.86%, 500=0.97% 00:22:43.194 cpu : usr=0.46%, sys=2.38%, ctx=1891, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=8038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job6: (groupid=0, jobs=1): err= 0: pid=3505882: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=728, BW=182MiB/s (191MB/s)(1841MiB/10108msec) 00:22:43.194 slat (usec): min=9, max=379889, avg=645.13, stdev=6287.48 00:22:43.194 clat (usec): min=1082, max=498076, avg=87147.76, stdev=72607.18 00:22:43.194 lat (usec): min=1120, max=669122, avg=87792.89, stdev=73354.32 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 28], 00:22:43.194 | 30.00th=[ 40], 40.00th=[ 53], 50.00th=[ 68], 60.00th=[ 87], 00:22:43.194 | 70.00th=[ 109], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 215], 00:22:43.194 | 99.00th=[ 296], 99.50th=[ 472], 99.90th=[ 485], 99.95th=[ 485], 00:22:43.194 | 99.99th=[ 498] 00:22:43.194 bw ( KiB/s): min=61952, max=291840, per=10.54%, avg=186828.55, stdev=67675.51, samples=20 00:22:43.194 iops : min= 242, max= 1140, avg=729.75, stdev=264.30, samples=20 00:22:43.194 lat (msec) : 2=0.20%, 4=1.20%, 10=5.00%, 20=8.12%, 50=23.68% 00:22:43.194 lat (msec) : 100=28.43%, 250=30.93%, 500=2.44% 00:22:43.194 cpu : usr=0.30%, sys=1.97%, ctx=1946, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=7362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job7: (groupid=0, jobs=1): err= 0: pid=3505883: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=664, BW=166MiB/s (174MB/s)(1680MiB/10108msec) 00:22:43.194 slat (usec): min=13, max=107948, avg=1339.63, stdev=4466.18 00:22:43.194 clat (msec): min=13, max=477, avg=94.86, stdev=65.62 00:22:43.194 lat (msec): min=13, max=542, avg=96.20, stdev=66.52 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 32], 20.00th=[ 34], 00:22:43.194 | 30.00th=[ 45], 40.00th=[ 63], 50.00th=[ 75], 60.00th=[ 92], 00:22:43.194 | 70.00th=[ 134], 80.00th=[ 155], 90.00th=[ 176], 95.00th=[ 197], 00:22:43.194 | 99.00th=[ 
309], 99.50th=[ 443], 99.90th=[ 468], 99.95th=[ 472], 00:22:43.194 | 99.99th=[ 477] 00:22:43.194 bw ( KiB/s): min=55296, max=508928, per=9.61%, avg=170351.05, stdev=119250.39, samples=20 00:22:43.194 iops : min= 216, max= 1988, avg=665.40, stdev=465.82, samples=20 00:22:43.194 lat (msec) : 20=0.12%, 50=32.93%, 100=28.95%, 250=36.69%, 500=1.31% 00:22:43.194 cpu : usr=0.53%, sys=2.18%, ctx=1457, majf=0, minf=3721 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job8: (groupid=0, jobs=1): err= 0: pid=3505890: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=752, BW=188MiB/s (197MB/s)(1898MiB/10089msec) 00:22:43.194 slat (usec): min=9, max=165563, avg=699.91, stdev=4467.75 00:22:43.194 clat (usec): min=1415, max=494654, avg=84268.12, stdev=71606.70 00:22:43.194 lat (usec): min=1471, max=494671, avg=84968.02, stdev=72051.79 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 22], 00:22:43.194 | 30.00th=[ 33], 40.00th=[ 50], 50.00th=[ 68], 60.00th=[ 85], 00:22:43.194 | 70.00th=[ 111], 80.00th=[ 144], 90.00th=[ 184], 95.00th=[ 218], 00:22:43.194 | 99.00th=[ 275], 99.50th=[ 460], 99.90th=[ 481], 99.95th=[ 485], 00:22:43.194 | 99.99th=[ 493] 00:22:43.194 bw ( KiB/s): min=67584, max=390144, per=10.88%, avg=192766.05, stdev=81274.90, samples=20 00:22:43.194 iops : min= 264, max= 1524, avg=752.95, stdev=317.50, samples=20 00:22:43.194 lat (msec) : 2=0.14%, 4=1.17%, 10=4.68%, 20=12.49%, 50=21.73% 00:22:43.194 lat (msec) : 100=25.81%, 250=32.58%, 500=1.40% 00:22:43.194 cpu : usr=0.40%, sys=2.10%, ctx=1902, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=7593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job9: (groupid=0, jobs=1): err= 0: pid=3505891: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=730, BW=183MiB/s (191MB/s)(1844MiB/10104msec) 00:22:43.194 slat (usec): min=9, max=251404, avg=1256.98, stdev=5181.93 00:22:43.194 clat (msec): min=6, max=354, avg=86.33, stdev=47.87 00:22:43.194 lat (msec): min=6, max=462, avg=87.58, stdev=48.51 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 43], 00:22:43.194 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 89], 00:22:43.194 | 70.00th=[ 101], 80.00th=[ 121], 90.00th=[ 146], 95.00th=[ 167], 00:22:43.194 | 99.00th=[ 249], 99.50th=[ 321], 99.90th=[ 330], 99.95th=[ 330], 00:22:43.194 | 99.99th=[ 355] 00:22:43.194 bw ( KiB/s): min=49664, max=417280, per=10.57%, avg=187238.40, stdev=81088.85, samples=20 00:22:43.194 iops : min= 194, max= 1630, avg=731.40, stdev=316.75, samples=20 00:22:43.194 lat (msec) : 10=0.24%, 20=1.08%, 50=22.80%, 100=45.61%, 250=29.33% 00:22:43.194 lat (msec) : 500=0.92% 00:22:43.194 cpu : usr=0.37%, sys=2.43%, ctx=1460, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:43.194 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=7377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 job10: (groupid=0, jobs=1): err= 0: pid=3505892: Wed Jul 10 10:52:58 2024 00:22:43.194 read: IOPS=541, BW=135MiB/s (142MB/s)(1368MiB/10109msec) 00:22:43.194 slat (usec): min=10, max=72654, avg=1443.70, stdev=4475.10 00:22:43.194 clat (msec): min=2, max=450, avg=116.70, stdev=53.27 00:22:43.194 lat (msec): min=2, max=450, avg=118.15, stdev=53.96 00:22:43.194 clat percentiles (msec): 00:22:43.194 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 57], 20.00th=[ 73], 00:22:43.194 | 30.00th=[ 83], 40.00th=[ 93], 50.00th=[ 112], 60.00th=[ 128], 00:22:43.194 | 70.00th=[ 140], 80.00th=[ 157], 90.00th=[ 192], 95.00th=[ 211], 00:22:43.194 | 99.00th=[ 243], 99.50th=[ 264], 99.90th=[ 443], 99.95th=[ 447], 00:22:43.194 | 99.99th=[ 451] 00:22:43.194 bw ( KiB/s): min=71680, max=230400, per=7.81%, avg=138415.75, stdev=46393.86, samples=20 00:22:43.194 iops : min= 280, max= 900, avg=540.65, stdev=181.18, samples=20 00:22:43.194 lat (msec) : 4=0.27%, 10=0.07%, 20=0.07%, 50=6.91%, 100=36.44% 00:22:43.194 lat (msec) : 250=55.54%, 500=0.69% 00:22:43.194 cpu : usr=0.42%, sys=1.90%, ctx=1292, majf=0, minf=4097 00:22:43.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:43.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:43.194 issued rwts: total=5470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:43.194 00:22:43.194 Run status group 0 (all jobs): 00:22:43.194 READ: bw=1730MiB/s (1814MB/s), 106MiB/s-199MiB/s (112MB/s-209MB/s), io=17.1GiB (18.3GB), run=10015-10109msec 00:22:43.195 00:22:43.195 Disk stats (read/write): 00:22:43.195 nvme0n1: ios=8416/0, merge=0/0, ticks=1233417/0, in_queue=1233417, util=97.23% 00:22:43.195 nvme10n1: ios=15171/0, merge=0/0, ticks=1238762/0, in_queue=1238762, util=97.46% 00:22:43.195 nvme1n1: ios=10922/0, merge=0/0, ticks=1242686/0, in_queue=1242686, util=97.72% 00:22:43.195 nvme2n1: ios=9571/0, merge=0/0, ticks=1228348/0, in_queue=1228348, util=97.83% 00:22:43.195 nvme3n1: ios=9622/0, merge=0/0, ticks=1242708/0, in_queue=1242708, util=97.92% 00:22:43.195 nvme4n1: ios=15899/0, merge=0/0, ticks=1237710/0, in_queue=1237710, util=98.26% 00:22:43.195 nvme5n1: ios=14505/0, merge=0/0, ticks=1238042/0, in_queue=1238042, util=98.41% 00:22:43.195 nvme6n1: ios=13243/0, merge=0/0, ticks=1230590/0, in_queue=1230590, util=98.52% 00:22:43.195 nvme7n1: ios=15021/0, merge=0/0, ticks=1241754/0, in_queue=1241754, util=98.94% 00:22:43.195 nvme8n1: ios=14545/0, merge=0/0, ticks=1226931/0, in_queue=1226931, util=99.11% 00:22:43.195 nvme9n1: ios=10740/0, merge=0/0, ticks=1232767/0, in_queue=1232767, util=99.23% 00:22:43.195 10:52:58 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:43.195 [global] 00:22:43.195 thread=1 00:22:43.195 invalidate=1 00:22:43.195 rw=randwrite 00:22:43.195 time_based=1 00:22:43.195 runtime=10 00:22:43.195 ioengine=libaio 00:22:43.195 direct=1 00:22:43.195 bs=262144 00:22:43.195 iodepth=64 00:22:43.195 norandommap=1 00:22:43.195 numjobs=1 00:22:43.195 
00:22:43.195 [job0] 00:22:43.195 filename=/dev/nvme0n1 00:22:43.195 [job1] 00:22:43.195 filename=/dev/nvme10n1 00:22:43.195 [job2] 00:22:43.195 filename=/dev/nvme1n1 00:22:43.195 [job3] 00:22:43.195 filename=/dev/nvme2n1 00:22:43.195 [job4] 00:22:43.195 filename=/dev/nvme3n1 00:22:43.195 [job5] 00:22:43.195 filename=/dev/nvme4n1 00:22:43.195 [job6] 00:22:43.195 filename=/dev/nvme5n1 00:22:43.195 [job7] 00:22:43.195 filename=/dev/nvme6n1 00:22:43.195 [job8] 00:22:43.195 filename=/dev/nvme7n1 00:22:43.195 [job9] 00:22:43.195 filename=/dev/nvme8n1 00:22:43.195 [job10] 00:22:43.195 filename=/dev/nvme9n1 00:22:43.195 Could not set queue depth (nvme0n1) 00:22:43.195 Could not set queue depth (nvme10n1) 00:22:43.195 Could not set queue depth (nvme1n1) 00:22:43.195 Could not set queue depth (nvme2n1) 00:22:43.195 Could not set queue depth (nvme3n1) 00:22:43.195 Could not set queue depth (nvme4n1) 00:22:43.195 Could not set queue depth (nvme5n1) 00:22:43.195 Could not set queue depth (nvme6n1) 00:22:43.195 Could not set queue depth (nvme7n1) 00:22:43.195 Could not set queue depth (nvme8n1) 00:22:43.195 Could not set queue depth (nvme9n1) 00:22:43.195 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:43.195 fio-3.35 00:22:43.195 Starting 11 threads 00:22:53.170 00:22:53.170 job0: (groupid=0, jobs=1): err= 0: pid=3506938: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=405, BW=101MiB/s (106MB/s)(1025MiB/10120msec); 0 zone resets 00:22:53.170 slat (usec): min=24, max=269951, avg=2432.34, stdev=7563.84 00:22:53.170 clat (msec): min=2, max=336, avg=155.37, stdev=66.86 00:22:53.170 lat (msec): min=2, max=336, avg=157.81, stdev=67.49 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 14], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 104], 00:22:53.170 | 30.00th=[ 127], 40.00th=[ 144], 50.00th=[ 161], 60.00th=[ 176], 00:22:53.170 | 70.00th=[ 190], 80.00th=[ 213], 90.00th=[ 236], 95.00th=[ 257], 00:22:53.170 | 99.00th=[ 309], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 338], 00:22:53.170 | 99.99th=[ 338] 00:22:53.170 bw ( KiB/s): min=63488, max=204288, per=8.32%, avg=103389.75, stdev=37078.05, samples=20 00:22:53.170 iops : min= 248, max= 798, avg=403.85, stdev=144.81, 
samples=20 00:22:53.170 lat (msec) : 4=0.12%, 10=0.54%, 20=1.17%, 50=9.83%, 100=7.83% 00:22:53.170 lat (msec) : 250=74.42%, 500=6.10% 00:22:53.170 cpu : usr=1.26%, sys=1.10%, ctx=960, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,4101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job1: (groupid=0, jobs=1): err= 0: pid=3506950: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=393, BW=98.5MiB/s (103MB/s)(1001MiB/10163msec); 0 zone resets 00:22:53.170 slat (usec): min=22, max=150631, avg=1774.29, stdev=5552.73 00:22:53.170 clat (usec): min=1381, max=513063, avg=160628.15, stdev=88533.65 00:22:53.170 lat (usec): min=1424, max=513097, avg=162402.44, stdev=89717.52 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 46], 20.00th=[ 97], 00:22:53.170 | 30.00th=[ 123], 40.00th=[ 138], 50.00th=[ 155], 60.00th=[ 174], 00:22:53.170 | 70.00th=[ 194], 80.00th=[ 226], 90.00th=[ 259], 95.00th=[ 292], 00:22:53.170 | 99.00th=[ 489], 99.50th=[ 498], 99.90th=[ 514], 99.95th=[ 514], 00:22:53.170 | 99.99th=[ 514] 00:22:53.170 bw ( KiB/s): min=34816, max=171520, per=8.12%, avg=100864.00, stdev=38545.42, samples=20 00:22:53.170 iops : min= 136, max= 670, avg=394.00, stdev=150.57, samples=20 00:22:53.170 lat (msec) : 2=0.20%, 4=0.57%, 10=1.82%, 20=2.47%, 50=6.20% 00:22:53.170 lat (msec) : 100=10.62%, 250=66.63%, 500=11.09%, 750=0.40% 00:22:53.170 cpu : usr=1.10%, sys=1.32%, ctx=2221, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,4003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job2: (groupid=0, jobs=1): err= 0: pid=3506956: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=473, BW=118MiB/s (124MB/s)(1199MiB/10115msec); 0 zone resets 00:22:53.170 slat (usec): min=22, max=115756, avg=1912.01, stdev=4420.76 00:22:53.170 clat (usec): min=1294, max=309723, avg=133041.33, stdev=63592.85 00:22:53.170 lat (usec): min=1369, max=330893, avg=134953.34, stdev=64446.02 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 9], 5.00th=[ 26], 10.00th=[ 44], 20.00th=[ 66], 00:22:53.170 | 30.00th=[ 89], 40.00th=[ 123], 50.00th=[ 146], 60.00th=[ 161], 00:22:53.170 | 70.00th=[ 174], 80.00th=[ 190], 90.00th=[ 209], 95.00th=[ 226], 00:22:53.170 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 309], 00:22:53.170 | 99.99th=[ 309] 00:22:53.170 bw ( KiB/s): min=71680, max=274432, per=9.75%, avg=121128.90, stdev=46510.34, samples=20 00:22:53.170 iops : min= 280, max= 1072, avg=473.15, stdev=181.67, samples=20 00:22:53.170 lat (msec) : 2=0.06%, 4=0.08%, 10=1.50%, 20=2.29%, 50=7.07% 00:22:53.170 lat (msec) : 100=20.76%, 250=66.40%, 500=1.84% 00:22:53.170 cpu : usr=1.45%, sys=1.53%, ctx=1825, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,4794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job3: (groupid=0, jobs=1): err= 0: pid=3506968: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=401, BW=100MiB/s (105MB/s)(1015MiB/10120msec); 0 zone resets 00:22:53.170 slat (usec): min=15, max=167135, avg=1860.06, stdev=6482.85 00:22:53.170 clat (usec): min=1332, max=582003, avg=157558.69, stdev=92875.52 00:22:53.170 lat (usec): min=1393, max=582074, avg=159418.75, stdev=94178.11 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 48], 20.00th=[ 82], 00:22:53.170 | 30.00th=[ 131], 40.00th=[ 146], 50.00th=[ 153], 60.00th=[ 165], 00:22:53.170 | 70.00th=[ 178], 80.00th=[ 207], 90.00th=[ 259], 95.00th=[ 292], 00:22:53.170 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:22:53.170 | 99.99th=[ 584] 00:22:53.170 bw ( KiB/s): min=26624, max=196608, per=8.24%, avg=102348.80, stdev=43256.51, samples=20 00:22:53.170 iops : min= 104, max= 768, avg=399.80, stdev=168.97, samples=20 00:22:53.170 lat (msec) : 2=0.22%, 4=0.96%, 10=2.31%, 20=1.92%, 50=4.97% 00:22:53.170 lat (msec) : 100=14.16%, 250=64.10%, 500=9.97%, 750=1.38% 00:22:53.170 cpu : usr=1.20%, sys=1.43%, ctx=2235, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,4061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job4: (groupid=0, jobs=1): err= 0: pid=3506975: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=514, BW=129MiB/s (135MB/s)(1303MiB/10132msec); 0 zone resets 00:22:53.170 slat (usec): min=17, max=96060, avg=1246.73, stdev=3999.29 00:22:53.170 clat (usec): min=1090, max=593722, avg=123068.72, stdev=83547.86 00:22:53.170 lat (usec): min=1117, max=593783, avg=124315.45, stdev=84393.03 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 10], 20.00th=[ 31], 00:22:53.170 | 30.00th=[ 79], 40.00th=[ 104], 50.00th=[ 136], 60.00th=[ 148], 00:22:53.170 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 220], 95.00th=[ 262], 00:22:53.170 | 99.00th=[ 342], 99.50th=[ 443], 99.90th=[ 510], 99.95th=[ 592], 00:22:53.170 | 99.99th=[ 592] 00:22:53.170 bw ( KiB/s): min=63488, max=306176, per=10.61%, avg=131850.35, stdev=53393.97, samples=20 00:22:53.170 iops : min= 248, max= 1196, avg=515.00, stdev=208.59, samples=20 00:22:53.170 lat (msec) : 2=0.42%, 4=5.77%, 10=3.82%, 20=4.34%, 50=11.01% 00:22:53.170 lat (msec) : 100=13.60%, 250=54.82%, 500=6.08%, 750=0.13% 00:22:53.170 cpu : usr=1.40%, sys=1.59%, ctx=3318, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,5213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job5: (groupid=0, jobs=1): err= 0: pid=3507006: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=475, BW=119MiB/s (125MB/s)(1209MiB/10159msec); 0 zone resets 00:22:53.170 slat (usec): min=17, max=327769, avg=1528.17, 
stdev=6991.42 00:22:53.170 clat (usec): min=1251, max=528224, avg=132866.36, stdev=86762.89 00:22:53.170 lat (usec): min=1288, max=533469, avg=134394.54, stdev=87713.48 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 22], 20.00th=[ 51], 00:22:53.170 | 30.00th=[ 80], 40.00th=[ 110], 50.00th=[ 136], 60.00th=[ 159], 00:22:53.170 | 70.00th=[ 174], 80.00th=[ 190], 90.00th=[ 222], 95.00th=[ 271], 00:22:53.170 | 99.00th=[ 447], 99.50th=[ 477], 99.90th=[ 518], 99.95th=[ 523], 00:22:53.170 | 99.99th=[ 527] 00:22:53.170 bw ( KiB/s): min=63488, max=274432, per=9.83%, avg=122171.80, stdev=52850.27, samples=20 00:22:53.170 iops : min= 248, max= 1072, avg=477.20, stdev=206.47, samples=20 00:22:53.170 lat (msec) : 2=0.10%, 4=0.89%, 10=3.27%, 20=5.27%, 50=10.11% 00:22:53.170 lat (msec) : 100=17.70%, 250=56.96%, 500=5.40%, 750=0.29% 00:22:53.170 cpu : usr=1.43%, sys=1.54%, ctx=2792, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,4835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job6: (groupid=0, jobs=1): err= 0: pid=3507031: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=469, BW=117MiB/s (123MB/s)(1187MiB/10116msec); 0 zone resets 00:22:53.170 slat (usec): min=19, max=84841, avg=1347.88, stdev=3976.54 00:22:53.170 clat (usec): min=1335, max=311032, avg=134942.88, stdev=68835.00 00:22:53.170 lat (usec): min=1405, max=311085, avg=136290.76, stdev=69636.89 00:22:53.170 clat percentiles (msec): 00:22:53.170 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 36], 20.00th=[ 72], 00:22:53.170 | 30.00th=[ 100], 40.00th=[ 126], 50.00th=[ 142], 60.00th=[ 150], 00:22:53.170 | 70.00th=[ 161], 80.00th=[ 192], 90.00th=[ 236], 95.00th=[ 253], 00:22:53.170 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 309], 00:22:53.170 | 99.99th=[ 313] 00:22:53.170 bw ( KiB/s): min=63488, max=202752, per=9.65%, avg=119943.70, stdev=37624.48, samples=20 00:22:53.170 iops : min= 248, max= 792, avg=468.50, stdev=147.01, samples=20 00:22:53.170 lat (msec) : 2=0.21%, 4=0.48%, 10=1.52%, 20=3.26%, 50=9.79% 00:22:53.170 lat (msec) : 100=15.33%, 250=63.29%, 500=6.11% 00:22:53.170 cpu : usr=1.37%, sys=1.57%, ctx=2912, majf=0, minf=1 00:22:53.170 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:53.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.170 issued rwts: total=0,4748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.170 job7: (groupid=0, jobs=1): err= 0: pid=3507046: Wed Jul 10 10:53:09 2024 00:22:53.170 write: IOPS=357, BW=89.3MiB/s (93.6MB/s)(907MiB/10158msec); 0 zone resets 00:22:53.170 slat (usec): min=19, max=86473, avg=2003.00, stdev=5770.61 00:22:53.170 clat (usec): min=1197, max=541986, avg=177075.31, stdev=86617.66 00:22:53.170 lat (usec): min=1233, max=551599, avg=179078.31, stdev=87667.87 00:22:53.170 clat percentiles (msec): 00:22:53.171 | 1.00th=[ 5], 5.00th=[ 29], 10.00th=[ 59], 20.00th=[ 117], 00:22:53.171 | 30.00th=[ 150], 40.00th=[ 161], 50.00th=[ 171], 60.00th=[ 184], 00:22:53.171 | 70.00th=[ 211], 80.00th=[ 247], 90.00th=[ 275], 
95.00th=[ 321], 00:22:53.171 | 99.00th=[ 460], 99.50th=[ 464], 99.90th=[ 477], 99.95th=[ 477], 00:22:53.171 | 99.99th=[ 542] 00:22:53.171 bw ( KiB/s): min=47104, max=141824, per=7.34%, avg=91273.50, stdev=29129.26, samples=20 00:22:53.171 iops : min= 184, max= 554, avg=356.50, stdev=113.78, samples=20 00:22:53.171 lat (msec) : 2=0.28%, 4=0.58%, 10=2.32%, 20=0.91%, 50=4.38% 00:22:53.171 lat (msec) : 100=9.73%, 250=63.78%, 500=18.00%, 750=0.03% 00:22:53.171 cpu : usr=1.10%, sys=1.22%, ctx=2033, majf=0, minf=1 00:22:53.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:53.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.171 issued rwts: total=0,3628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.171 job8: (groupid=0, jobs=1): err= 0: pid=3507100: Wed Jul 10 10:53:09 2024 00:22:53.171 write: IOPS=444, BW=111MiB/s (117MB/s)(1124MiB/10113msec); 0 zone resets 00:22:53.171 slat (usec): min=12, max=251733, avg=1486.20, stdev=5907.29 00:22:53.171 clat (usec): min=1089, max=496969, avg=142371.89, stdev=89103.50 00:22:53.171 lat (usec): min=1116, max=497003, avg=143858.09, stdev=90241.79 00:22:53.171 clat percentiles (usec): 00:22:53.171 | 1.00th=[ 1565], 5.00th=[ 10421], 10.00th=[ 19792], 20.00th=[ 50070], 00:22:53.171 | 30.00th=[ 91751], 40.00th=[125305], 50.00th=[149947], 60.00th=[162530], 00:22:53.171 | 70.00th=[181404], 80.00th=[210764], 90.00th=[248513], 95.00th=[283116], 00:22:53.171 | 99.00th=[400557], 99.50th=[459277], 99.90th=[488637], 99.95th=[492831], 00:22:53.171 | 99.99th=[497026] 00:22:53.171 bw ( KiB/s): min=54893, max=252416, per=9.13%, avg=113515.85, stdev=45964.79, samples=20 00:22:53.171 iops : min= 214, max= 986, avg=443.40, stdev=179.58, samples=20 00:22:53.171 lat (msec) : 2=1.20%, 4=0.58%, 10=2.91%, 20=5.34%, 50=10.01% 00:22:53.171 lat (msec) : 100=11.41%, 250=58.75%, 500=9.81% 00:22:53.171 cpu : usr=1.31%, sys=1.55%, ctx=2902, majf=0, minf=1 00:22:53.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:53.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.171 issued rwts: total=0,4497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.171 job9: (groupid=0, jobs=1): err= 0: pid=3507110: Wed Jul 10 10:53:09 2024 00:22:53.171 write: IOPS=569, BW=142MiB/s (149MB/s)(1440MiB/10113msec); 0 zone resets 00:22:53.171 slat (usec): min=18, max=139022, avg=1365.83, stdev=4259.49 00:22:53.171 clat (usec): min=1326, max=473273, avg=110943.02, stdev=81860.42 00:22:53.171 lat (usec): min=1427, max=475207, avg=112308.85, stdev=82801.40 00:22:53.171 clat percentiles (msec): 00:22:53.171 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 53], 00:22:53.171 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 78], 60.00th=[ 124], 00:22:53.171 | 70.00th=[ 148], 80.00th=[ 174], 90.00th=[ 226], 95.00th=[ 255], 00:22:53.171 | 99.00th=[ 393], 99.50th=[ 451], 99.90th=[ 464], 99.95th=[ 468], 00:22:53.171 | 99.99th=[ 472] 00:22:53.171 bw ( KiB/s): min=51302, max=293888, per=11.74%, avg=145848.30, stdev=79578.92, samples=20 00:22:53.171 iops : min= 200, max= 1148, avg=569.70, stdev=310.88, samples=20 00:22:53.171 lat (msec) : 2=0.09%, 4=0.47%, 10=2.38%, 20=7.12%, 50=8.30% 00:22:53.171 
lat (msec) : 100=36.42%, 250=39.44%, 500=5.78% 00:22:53.171 cpu : usr=1.79%, sys=1.81%, ctx=2829, majf=0, minf=1 00:22:53.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:53.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.171 issued rwts: total=0,5760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.171 job10: (groupid=0, jobs=1): err= 0: pid=3507111: Wed Jul 10 10:53:09 2024 00:22:53.171 write: IOPS=364, BW=91.0MiB/s (95.5MB/s)(925MiB/10158msec); 0 zone resets 00:22:53.171 slat (usec): min=25, max=170286, avg=2314.31, stdev=6358.71 00:22:53.171 clat (msec): min=2, max=566, avg=173.34, stdev=86.20 00:22:53.171 lat (msec): min=3, max=566, avg=175.65, stdev=87.23 00:22:53.171 clat percentiles (msec): 00:22:53.171 | 1.00th=[ 18], 5.00th=[ 70], 10.00th=[ 87], 20.00th=[ 114], 00:22:53.171 | 30.00th=[ 138], 40.00th=[ 148], 50.00th=[ 161], 60.00th=[ 171], 00:22:53.171 | 70.00th=[ 188], 80.00th=[ 209], 90.00th=[ 275], 95.00th=[ 351], 00:22:53.171 | 99.00th=[ 506], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 567], 00:22:53.171 | 99.99th=[ 567] 00:22:53.171 bw ( KiB/s): min=28672, max=150528, per=7.49%, avg=93089.10, stdev=34961.70, samples=20 00:22:53.171 iops : min= 112, max= 588, avg=363.60, stdev=136.58, samples=20 00:22:53.171 lat (msec) : 4=0.08%, 10=0.24%, 20=0.84%, 50=2.54%, 100=12.27% 00:22:53.171 lat (msec) : 250=71.70%, 500=11.22%, 750=1.11% 00:22:53.171 cpu : usr=1.04%, sys=1.23%, ctx=1412, majf=0, minf=1 00:22:53.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:53.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:53.171 issued rwts: total=0,3699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:53.171 00:22:53.171 Run status group 0 (all jobs): 00:22:53.171 WRITE: bw=1214MiB/s (1273MB/s), 89.3MiB/s-142MiB/s (93.6MB/s-149MB/s), io=12.0GiB (12.9GB), run=10113-10163msec 00:22:53.171 00:22:53.171 Disk stats (read/write): 00:22:53.171 nvme0n1: ios=56/8000, merge=0/0, ticks=3613/1107938, in_queue=1111551, util=99.67% 00:22:53.171 nvme10n1: ios=41/7971, merge=0/0, ticks=39/1243391, in_queue=1243430, util=97.24% 00:22:53.171 nvme1n1: ios=42/9399, merge=0/0, ticks=584/1202465, in_queue=1203049, util=99.85% 00:22:53.171 nvme2n1: ios=0/7923, merge=0/0, ticks=0/1212580, in_queue=1212580, util=97.54% 00:22:53.171 nvme3n1: ios=0/10178, merge=0/0, ticks=0/1217821, in_queue=1217821, util=97.60% 00:22:53.171 nvme4n1: ios=0/9662, merge=0/0, ticks=0/1245927, in_queue=1245927, util=98.04% 00:22:53.171 nvme5n1: ios=0/9242, merge=0/0, ticks=0/1217467, in_queue=1217467, util=98.18% 00:22:53.171 nvme6n1: ios=0/7250, merge=0/0, ticks=0/1244740, in_queue=1244740, util=98.34% 00:22:53.171 nvme7n1: ios=0/8762, merge=0/0, ticks=0/1216621, in_queue=1216621, util=98.68% 00:22:53.171 nvme8n1: ios=0/11333, merge=0/0, ticks=0/1214214, in_queue=1214214, util=98.96% 00:22:53.171 nvme9n1: ios=0/7391, merge=0/0, ticks=0/1238210, in_queue=1238210, util=99.10% 00:22:53.171 10:53:09 -- target/multiconnection.sh@36 -- # sync 00:22:53.171 10:53:09 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:53.171 10:53:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.171 
10:53:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:53.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:53.171 10:53:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:53.171 10:53:09 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.171 10:53:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.171 10:53:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:53.171 10:53:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.171 10:53:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:53.171 10:53:09 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.171 10:53:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.171 10:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.171 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:53.171 10:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.171 10:53:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.171 10:53:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:53.171 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:53.171 10:53:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:53.171 10:53:09 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.171 10:53:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.171 10:53:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:53.171 10:53:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.171 10:53:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:53.171 10:53:09 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.171 10:53:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:53.171 10:53:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.171 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:53.171 10:53:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.171 10:53:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.171 10:53:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:53.429 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:53.429 10:53:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:53.429 10:53:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.429 10:53:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.429 10:53:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:53.429 10:53:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.429 10:53:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:53.429 10:53:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.429 10:53:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:53.429 10:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.429 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.429 10:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.429 10:53:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.429 10:53:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 
00:22:53.688 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:53.688 10:53:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:53.688 10:53:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.688 10:53:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.688 10:53:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:53.688 10:53:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.688 10:53:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:53.688 10:53:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.688 10:53:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:53.688 10:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.688 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.688 10:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.688 10:53:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.688 10:53:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:53.945 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:53.945 10:53:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:53.945 10:53:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.945 10:53:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.945 10:53:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:53.945 10:53:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.945 10:53:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:53.946 10:53:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.946 10:53:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:53.946 10:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.946 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.946 10:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.946 10:53:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.946 10:53:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:53.946 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:53.946 10:53:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:53.946 10:53:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.946 10:53:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.946 10:53:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:53.946 10:53:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.946 10:53:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:53.946 10:53:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.946 10:53:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:53.946 10:53:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.946 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.946 10:53:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.946 10:53:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.946 10:53:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:54.510 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:54.510 10:53:11 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:54.510 10:53:11 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.510 10:53:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.510 10:53:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:54.510 10:53:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.510 10:53:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:54.510 10:53:11 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.510 10:53:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:54.510 10:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.510 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.510 10:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.510 10:53:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.510 10:53:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:54.510 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:54.510 10:53:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:54.510 10:53:11 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.510 10:53:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.510 10:53:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:54.510 10:53:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.510 10:53:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:54.511 10:53:11 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.511 10:53:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:54.511 10:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.511 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.511 10:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.511 10:53:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.511 10:53:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:54.511 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:54.511 10:53:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:54.511 10:53:11 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.511 10:53:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.511 10:53:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:54.768 10:53:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.768 10:53:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:54.768 10:53:11 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.768 10:53:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:54.768 10:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.768 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.768 10:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.768 10:53:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.768 10:53:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:54.768 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:54.768 10:53:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:54.768 10:53:11 -- 
common/autotest_common.sh@1198 -- # local i=0 00:22:54.768 10:53:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.768 10:53:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:54.768 10:53:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.768 10:53:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:54.768 10:53:11 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.768 10:53:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:54.768 10:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.768 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.768 10:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.768 10:53:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.768 10:53:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:54.768 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:54.768 10:53:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:54.768 10:53:11 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.768 10:53:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.768 10:53:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:54.768 10:53:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.768 10:53:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:54.768 10:53:11 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.768 10:53:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:54.768 10:53:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.768 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.768 10:53:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.768 10:53:11 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:54.768 10:53:11 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:54.768 10:53:11 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:54.768 10:53:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:54.768 10:53:11 -- nvmf/common.sh@116 -- # sync 00:22:54.768 10:53:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:54.768 10:53:11 -- nvmf/common.sh@119 -- # set +e 00:22:54.768 10:53:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:54.768 10:53:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:54.768 rmmod nvme_tcp 00:22:54.768 rmmod nvme_fabrics 00:22:54.768 rmmod nvme_keyring 00:22:55.026 10:53:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:55.026 10:53:11 -- nvmf/common.sh@123 -- # set -e 00:22:55.026 10:53:11 -- nvmf/common.sh@124 -- # return 0 00:22:55.026 10:53:11 -- nvmf/common.sh@477 -- # '[' -n 3501368 ']' 00:22:55.026 10:53:11 -- nvmf/common.sh@478 -- # killprocess 3501368 00:22:55.026 10:53:11 -- common/autotest_common.sh@926 -- # '[' -z 3501368 ']' 00:22:55.026 10:53:11 -- common/autotest_common.sh@930 -- # kill -0 3501368 00:22:55.026 10:53:11 -- common/autotest_common.sh@931 -- # uname 00:22:55.026 10:53:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:55.026 10:53:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3501368 00:22:55.026 10:53:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:55.026 10:53:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 
00:22:55.026 10:53:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3501368' 00:22:55.026 killing process with pid 3501368 00:22:55.026 10:53:11 -- common/autotest_common.sh@945 -- # kill 3501368 00:22:55.026 10:53:11 -- common/autotest_common.sh@950 -- # wait 3501368 00:22:55.591 10:53:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:55.591 10:53:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:55.591 10:53:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:55.591 10:53:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.591 10:53:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:55.591 10:53:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.591 10:53:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.591 10:53:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.493 10:53:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:57.493 00:22:57.493 real 1m0.949s 00:22:57.493 user 3m21.275s 00:22:57.493 sys 0m24.170s 00:22:57.493 10:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.493 10:53:14 -- common/autotest_common.sh@10 -- # set +x 00:22:57.493 ************************************ 00:22:57.493 END TEST nvmf_multiconnection 00:22:57.493 ************************************ 00:22:57.493 10:53:14 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:57.493 10:53:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:57.493 10:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:57.493 10:53:14 -- common/autotest_common.sh@10 -- # set +x 00:22:57.493 ************************************ 00:22:57.493 START TEST nvmf_initiator_timeout 00:22:57.493 ************************************ 00:22:57.493 10:53:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:57.493 * Looking for test storage... 
00:22:57.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.493 10:53:14 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.493 10:53:14 -- nvmf/common.sh@7 -- # uname -s 00:22:57.493 10:53:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.493 10:53:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.493 10:53:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.493 10:53:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.493 10:53:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.493 10:53:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.493 10:53:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.493 10:53:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.493 10:53:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.493 10:53:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.493 10:53:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.493 10:53:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.493 10:53:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.494 10:53:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.494 10:53:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.494 10:53:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.494 10:53:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.494 10:53:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.494 10:53:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.494 10:53:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.494 10:53:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.494 10:53:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.494 10:53:14 -- paths/export.sh@5 -- # export PATH 00:22:57.494 10:53:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.494 10:53:14 -- nvmf/common.sh@46 -- # : 0 00:22:57.494 10:53:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:57.494 10:53:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:57.494 10:53:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:57.494 10:53:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.494 10:53:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.494 10:53:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:57.494 10:53:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:57.494 10:53:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:57.494 10:53:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.494 10:53:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.494 10:53:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:57.494 10:53:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:57.494 10:53:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.494 10:53:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:57.494 10:53:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:57.494 10:53:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:57.494 10:53:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.494 10:53:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.494 10:53:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.494 10:53:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:57.494 10:53:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:57.494 10:53:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:57.494 10:53:14 -- common/autotest_common.sh@10 -- # set +x 00:22:59.394 10:53:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:59.394 10:53:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:59.394 10:53:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:59.394 10:53:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:59.394 10:53:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:59.394 10:53:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:59.394 10:53:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:59.394 10:53:16 -- nvmf/common.sh@294 -- # net_devs=() 00:22:59.394 10:53:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:59.394 
10:53:16 -- nvmf/common.sh@295 -- # e810=() 00:22:59.394 10:53:16 -- nvmf/common.sh@295 -- # local -ga e810 00:22:59.394 10:53:16 -- nvmf/common.sh@296 -- # x722=() 00:22:59.394 10:53:16 -- nvmf/common.sh@296 -- # local -ga x722 00:22:59.394 10:53:16 -- nvmf/common.sh@297 -- # mlx=() 00:22:59.394 10:53:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:59.394 10:53:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.394 10:53:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:59.394 10:53:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:59.394 10:53:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:59.394 10:53:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:59.394 10:53:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:59.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:59.394 10:53:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:59.394 10:53:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:59.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:59.394 10:53:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:59.394 10:53:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:59.394 10:53:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.394 10:53:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:59.394 10:53:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.394 10:53:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:22:59.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:59.394 10:53:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.394 10:53:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:59.394 10:53:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.394 10:53:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:59.394 10:53:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.394 10:53:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:59.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:59.394 10:53:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.394 10:53:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:59.394 10:53:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:59.394 10:53:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:59.394 10:53:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:59.394 10:53:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.394 10:53:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.394 10:53:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.394 10:53:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:59.394 10:53:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.394 10:53:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.394 10:53:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:59.394 10:53:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.394 10:53:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.394 10:53:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:59.394 10:53:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:59.394 10:53:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.394 10:53:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.652 10:53:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.652 10:53:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.652 10:53:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:59.652 10:53:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.652 10:53:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.652 10:53:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.652 10:53:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:59.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:22:59.652 00:22:59.652 --- 10.0.0.2 ping statistics --- 00:22:59.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.652 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:22:59.652 10:53:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:22:59.652 00:22:59.652 --- 10.0.0.1 ping statistics --- 00:22:59.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.652 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:59.652 10:53:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.652 10:53:16 -- nvmf/common.sh@410 -- # return 0 00:22:59.652 10:53:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:59.652 10:53:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.652 10:53:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:59.652 10:53:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:59.652 10:53:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.652 10:53:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:59.652 10:53:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:59.652 10:53:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:59.652 10:53:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:59.652 10:53:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:59.652 10:53:16 -- common/autotest_common.sh@10 -- # set +x 00:22:59.652 10:53:16 -- nvmf/common.sh@469 -- # nvmfpid=3510487 00:22:59.652 10:53:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:59.652 10:53:16 -- nvmf/common.sh@470 -- # waitforlisten 3510487 00:22:59.652 10:53:16 -- common/autotest_common.sh@819 -- # '[' -z 3510487 ']' 00:22:59.652 10:53:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.652 10:53:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:59.652 10:53:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.652 10:53:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:59.652 10:53:16 -- common/autotest_common.sh@10 -- # set +x 00:22:59.652 [2024-07-10 10:53:16.357879] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:59.653 [2024-07-10 10:53:16.357950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.653 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.653 [2024-07-10 10:53:16.421046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.910 [2024-07-10 10:53:16.505893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:59.910 [2024-07-10 10:53:16.506049] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.910 [2024-07-10 10:53:16.506066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.910 [2024-07-10 10:53:16.506080] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.910 [2024-07-10 10:53:16.506130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.910 [2024-07-10 10:53:16.506194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.910 [2024-07-10 10:53:16.506257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.910 [2024-07-10 10:53:16.506259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.844 10:53:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:00.844 10:53:17 -- common/autotest_common.sh@852 -- # return 0 00:23:00.844 10:53:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:00.844 10:53:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 10:53:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:00.844 10:53:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 Malloc0 00:23:00.844 10:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:00.844 10:53:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 Delay0 00:23:00.844 10:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.844 10:53:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 [2024-07-10 10:53:17.409321] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.844 10:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:00.844 10:53:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 10:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:00.844 10:53:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 10:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.844 10:53:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.844 10:53:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 [2024-07-10 10:53:17.437630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.844 10:53:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.844 10:53:17 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:01.410 10:53:18 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:01.410 10:53:18 -- common/autotest_common.sh@1177 -- # local i=0 00:23:01.410 10:53:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:01.410 10:53:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:01.410 10:53:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:03.309 10:53:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:03.309 10:53:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:03.309 10:53:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:03.567 10:53:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:03.567 10:53:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:03.567 10:53:20 -- common/autotest_common.sh@1187 -- # return 0 00:23:03.567 10:53:20 -- target/initiator_timeout.sh@35 -- # fio_pid=3510932 00:23:03.567 10:53:20 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:03.567 10:53:20 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:03.567 [global] 00:23:03.567 thread=1 00:23:03.567 invalidate=1 00:23:03.567 rw=write 00:23:03.567 time_based=1 00:23:03.567 runtime=60 00:23:03.567 ioengine=libaio 00:23:03.567 direct=1 00:23:03.567 bs=4096 00:23:03.567 iodepth=1 00:23:03.567 norandommap=0 00:23:03.567 numjobs=1 00:23:03.567 00:23:03.567 verify_dump=1 00:23:03.567 verify_backlog=512 00:23:03.567 verify_state_save=0 00:23:03.567 do_verify=1 00:23:03.567 verify=crc32c-intel 00:23:03.567 [job0] 00:23:03.567 filename=/dev/nvme0n1 00:23:03.567 Could not set queue depth (nvme0n1) 00:23:03.567 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:03.567 fio-3.35 00:23:03.567 Starting 1 thread 00:23:06.845 10:53:23 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:06.845 10:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.845 10:53:23 -- common/autotest_common.sh@10 -- # set +x 00:23:06.845 true 00:23:06.845 10:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.845 10:53:23 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:06.845 10:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.846 10:53:23 -- common/autotest_common.sh@10 -- # set +x 00:23:06.846 true 00:23:06.846 10:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.846 10:53:23 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:06.846 10:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.846 10:53:23 -- common/autotest_common.sh@10 -- # set +x 00:23:06.846 true 00:23:06.846 10:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.846 10:53:23 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:06.846 10:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.846 10:53:23 -- common/autotest_common.sh@10 -- # set +x 00:23:06.846 true 00:23:06.846 10:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
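For reference, the RPC sequence driven above by initiator_timeout.sh reduces to the sketch below: a malloc bdev wrapped in a delay bdev, exported over NVMe/TCP, with the delay latencies inflated mid-run so the initiator's I/O timeout fires while fio is writing, then restored so outstanding I/O can drain. The rpc.py path is an assumption (the test uses its in-tree rpc_cmd helper), the latency values are the ones visible in this trace, and the nvme connect / fio steps from the trace are omitted here:

  RPC=./scripts/rpc.py    # assumed location of the SPDK RPC client

  # Delay0 wraps Malloc0; the four options set the average and p99 read/write latencies.
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

  # Export Delay0 over NVMe/TCP on 10.0.0.2:4420.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # While fio runs: push the latencies to tens of seconds to trip the initiator timeout...
  $RPC bdev_delay_update_latency Delay0 avg_read  31000000
  $RPC bdev_delay_update_latency Delay0 avg_write 31000000
  $RPC bdev_delay_update_latency Delay0 p99_read  31000000
  $RPC bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # ...then drop them back so the queued I/O can complete and fio finishes cleanly.
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 $lat 30
  done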
00:23:06.846 10:53:23 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:09.372 10:53:26 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:09.372 10:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.372 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:09.372 true 00:23:09.372 10:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.372 10:53:26 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:09.372 10:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.372 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:09.372 true 00:23:09.372 10:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.372 10:53:26 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:09.372 10:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.372 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:09.372 true 00:23:09.372 10:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.372 10:53:26 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:09.628 10:53:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:09.628 10:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:09.628 true 00:23:09.628 10:53:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:09.628 10:53:26 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:09.628 10:53:26 -- target/initiator_timeout.sh@54 -- # wait 3510932 00:24:05.823 00:24:05.823 job0: (groupid=0, jobs=1): err= 0: pid=3511008: Wed Jul 10 10:54:20 2024 00:24:05.823 read: IOPS=50, BW=201KiB/s (206kB/s)(11.8MiB/60033msec) 00:24:05.823 slat (nsec): min=5748, max=73416, avg=16556.31, stdev=9953.51 00:24:05.823 clat (usec): min=297, max=41221k, avg=19594.16, stdev=750730.91 00:24:05.823 lat (usec): min=308, max=41221k, avg=19610.72, stdev=750730.95 00:24:05.823 clat percentiles (usec): 00:24:05.823 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 367], 00:24:05.823 | 20.00th=[ 408], 30.00th=[ 424], 40.00th=[ 437], 00:24:05.823 | 50.00th=[ 445], 60.00th=[ 457], 70.00th=[ 469], 00:24:05.823 | 80.00th=[ 494], 90.00th=[ 41157], 95.00th=[ 42206], 00:24:05.823 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:24:05.823 | 99.95th=[ 43779], 99.99th=[17112761] 00:24:05.823 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60033msec); 0 zone resets 00:24:05.823 slat (usec): min=5, max=9986, avg=21.50, stdev=213.14 00:24:05.823 clat (usec): min=205, max=505, avg=264.11, stdev=44.31 00:24:05.823 lat (usec): min=214, max=10334, avg=285.62, stdev=221.50 00:24:05.823 clat percentiles (usec): 00:24:05.823 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:24:05.823 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:24:05.823 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 363], 00:24:05.823 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 494], 00:24:05.823 | 99.99th=[ 506] 00:24:05.823 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=6 00:24:05.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=6 00:24:05.823 lat (usec) : 250=27.37%, 500=63.92%, 750=2.05%, 1000=0.03% 00:24:05.823 lat (msec) : 50=6.60%, >=2000=0.02% 00:24:05.823 cpu : usr=0.09%, sys=0.22%, ctx=6089, majf=0, minf=2 00:24:05.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.823 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.823 issued rwts: total=3015,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.823 00:24:05.823 Run status group 0 (all jobs): 00:24:05.823 READ: bw=201KiB/s (206kB/s), 201KiB/s-201KiB/s (206kB/s-206kB/s), io=11.8MiB (12.3MB), run=60033-60033msec 00:24:05.823 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60033-60033msec 00:24:05.823 00:24:05.823 Disk stats (read/write): 00:24:05.823 nvme0n1: ios=3110/3072, merge=0/0, ticks=18871/793, in_queue=19664, util=99.88% 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:05.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:05.823 10:54:20 -- common/autotest_common.sh@1198 -- # local i=0 00:24:05.823 10:54:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:05.823 10:54:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.823 10:54:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:05.823 10:54:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.823 10:54:20 -- common/autotest_common.sh@1210 -- # return 0 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:05.823 nvmf hotplug test: fio successful as expected 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.823 10:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:05.823 10:54:20 -- common/autotest_common.sh@10 -- # set +x 00:24:05.823 10:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:05.823 10:54:20 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:05.823 10:54:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:05.823 10:54:20 -- nvmf/common.sh@116 -- # sync 00:24:05.823 10:54:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:05.823 10:54:20 -- nvmf/common.sh@119 -- # set +e 00:24:05.823 10:54:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:05.823 10:54:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:05.823 rmmod nvme_tcp 00:24:05.823 rmmod nvme_fabrics 00:24:05.823 rmmod nvme_keyring 00:24:05.823 10:54:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:05.823 10:54:20 -- nvmf/common.sh@123 -- # set -e 00:24:05.823 10:54:20 -- nvmf/common.sh@124 -- # return 0 00:24:05.823 10:54:20 -- nvmf/common.sh@477 -- # '[' -n 3510487 ']' 00:24:05.823 10:54:20 -- nvmf/common.sh@478 -- # killprocess 3510487 00:24:05.823 10:54:20 -- common/autotest_common.sh@926 -- # '[' -z 3510487 ']' 00:24:05.823 10:54:20 -- common/autotest_common.sh@930 -- # kill -0 3510487 00:24:05.823 10:54:20 -- common/autotest_common.sh@931 -- # uname 00:24:05.823 10:54:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:05.823 10:54:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3510487 
00:24:05.823 10:54:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:05.823 10:54:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:05.823 10:54:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3510487' 00:24:05.823 killing process with pid 3510487 00:24:05.823 10:54:20 -- common/autotest_common.sh@945 -- # kill 3510487 00:24:05.823 10:54:20 -- common/autotest_common.sh@950 -- # wait 3510487 00:24:05.823 10:54:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:05.823 10:54:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:05.823 10:54:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:05.823 10:54:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.823 10:54:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:05.823 10:54:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.823 10:54:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.823 10:54:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.388 10:54:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:06.388 00:24:06.388 real 1m8.763s 00:24:06.388 user 4m14.013s 00:24:06.388 sys 0m6.406s 00:24:06.388 10:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:06.388 10:54:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.388 ************************************ 00:24:06.388 END TEST nvmf_initiator_timeout 00:24:06.388 ************************************ 00:24:06.388 10:54:22 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:06.388 10:54:22 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:06.388 10:54:22 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:06.388 10:54:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:06.388 10:54:22 -- common/autotest_common.sh@10 -- # set +x 00:24:08.289 10:54:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:08.289 10:54:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:08.289 10:54:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:08.289 10:54:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:08.289 10:54:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:08.289 10:54:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:08.289 10:54:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:08.289 10:54:24 -- nvmf/common.sh@294 -- # net_devs=() 00:24:08.289 10:54:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:08.289 10:54:24 -- nvmf/common.sh@295 -- # e810=() 00:24:08.289 10:54:24 -- nvmf/common.sh@295 -- # local -ga e810 00:24:08.289 10:54:24 -- nvmf/common.sh@296 -- # x722=() 00:24:08.289 10:54:24 -- nvmf/common.sh@296 -- # local -ga x722 00:24:08.289 10:54:24 -- nvmf/common.sh@297 -- # mlx=() 00:24:08.289 10:54:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:08.289 10:54:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.289 10:54:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:08.289 10:54:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:08.289 10:54:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:08.289 10:54:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.289 10:54:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:08.289 10:54:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.289 10:54:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.289 10:54:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:08.289 10:54:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:08.289 10:54:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.289 10:54:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:08.289 10:54:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.289 10:54:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.289 10:54:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.289 10:54:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:08.289 10:54:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.289 10:54:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:08.289 10:54:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.289 10:54:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.289 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.289 10:54:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.289 10:54:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:08.289 10:54:24 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.289 10:54:24 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:08.289 10:54:24 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:08.289 10:54:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:08.289 10:54:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:08.289 10:54:24 -- common/autotest_common.sh@10 -- # set +x 00:24:08.289 ************************************ 00:24:08.289 START TEST nvmf_perf_adq 00:24:08.289 ************************************ 00:24:08.289 10:54:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:08.289 * Looking for test storage... 00:24:08.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:08.289 10:54:24 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.289 10:54:24 -- nvmf/common.sh@7 -- # uname -s 00:24:08.289 10:54:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.289 10:54:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.289 10:54:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.289 10:54:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.289 10:54:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.289 10:54:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.289 10:54:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.289 10:54:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.289 10:54:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.289 10:54:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.289 10:54:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.289 10:54:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.289 10:54:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.289 10:54:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.289 10:54:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.289 10:54:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.289 10:54:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.289 10:54:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.289 10:54:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.289 10:54:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.289 10:54:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.289 10:54:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.289 10:54:24 -- paths/export.sh@5 -- # export PATH 00:24:08.289 10:54:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.289 10:54:24 -- nvmf/common.sh@46 -- # : 0 00:24:08.289 10:54:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:08.290 10:54:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:08.290 10:54:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:08.290 10:54:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.290 10:54:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.290 10:54:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:08.290 10:54:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:08.290 10:54:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:08.290 10:54:24 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:08.290 10:54:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:08.290 10:54:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.193 10:54:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.193 10:54:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:10.193 10:54:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:10.193 10:54:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:10.193 10:54:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:10.193 10:54:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:10.193 10:54:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:10.193 10:54:26 -- nvmf/common.sh@294 -- # net_devs=() 00:24:10.193 10:54:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:10.193 10:54:26 -- nvmf/common.sh@295 -- # e810=() 00:24:10.193 10:54:26 -- nvmf/common.sh@295 -- # local -ga e810 00:24:10.193 10:54:26 -- nvmf/common.sh@296 -- # x722=() 00:24:10.193 10:54:26 -- nvmf/common.sh@296 -- # local -ga x722 00:24:10.194 10:54:26 -- nvmf/common.sh@297 -- # mlx=() 00:24:10.194 10:54:26 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:10.194 10:54:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.194 10:54:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:10.194 10:54:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:10.194 10:54:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:10.194 10:54:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.194 10:54:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:10.194 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:10.194 10:54:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.194 10:54:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:10.194 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:10.194 10:54:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:10.194 10:54:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:10.194 10:54:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.194 10:54:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.194 10:54:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.194 10:54:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.194 10:54:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:10.194 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:10.194 10:54:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.194 10:54:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.194 10:54:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:10.194 10:54:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.194 10:54:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.194 10:54:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:10.194 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:10.194 10:54:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.194 10:54:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:10.194 10:54:26 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.194 10:54:26 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:10.194 10:54:26 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:10.194 10:54:26 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:10.194 10:54:26 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:10.760 10:54:27 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:12.661 10:54:29 -- target/perf_adq.sh@54 -- # sleep 5 00:24:17.933 10:54:34 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:17.933 10:54:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:17.933 10:54:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.933 10:54:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:17.933 10:54:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:17.933 10:54:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:17.933 10:54:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.933 10:54:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.933 10:54:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.933 10:54:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:17.933 10:54:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:17.933 10:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.933 10:54:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:17.933 10:54:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:17.933 10:54:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:17.933 10:54:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:17.933 10:54:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:17.933 10:54:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:17.933 10:54:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:17.933 10:54:34 -- nvmf/common.sh@294 -- # net_devs=() 00:24:17.933 10:54:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:17.933 10:54:34 -- nvmf/common.sh@295 -- # e810=() 00:24:17.933 10:54:34 -- nvmf/common.sh@295 -- # local -ga e810 00:24:17.933 10:54:34 -- nvmf/common.sh@296 -- # x722=() 00:24:17.933 10:54:34 -- nvmf/common.sh@296 -- # local -ga x722 00:24:17.933 10:54:34 -- nvmf/common.sh@297 -- # mlx=() 00:24:17.933 10:54:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:17.933 10:54:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.933 10:54:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:17.933 10:54:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:17.933 10:54:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:17.933 10:54:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.933 10:54:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.933 10:54:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.933 10:54:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.933 10:54:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:17.933 10:54:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.933 10:54:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.933 10:54:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.933 10:54:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.933 10:54:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.933 10:54:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.933 10:54:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.933 10:54:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.933 10:54:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.933 10:54:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.933 10:54:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:17.933 10:54:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.933 10:54:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:17.933 10:54:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:17.933 10:54:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:17.933 10:54:34 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:17.933 10:54:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.933 10:54:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.933 10:54:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.933 10:54:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:17.933 10:54:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.933 10:54:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.933 10:54:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:17.933 10:54:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.933 10:54:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.933 10:54:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:17.933 10:54:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:17.933 10:54:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.933 10:54:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.933 10:54:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.933 10:54:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.933 10:54:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:17.933 10:54:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.933 10:54:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.933 10:54:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.933 10:54:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:17.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:24:17.933 00:24:17.933 --- 10.0.0.2 ping statistics --- 00:24:17.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.933 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:24:17.933 10:54:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:17.933 00:24:17.933 --- 10.0.0.1 ping statistics --- 00:24:17.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.933 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:17.933 10:54:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.933 10:54:34 -- nvmf/common.sh@410 -- # return 0 00:24:17.933 10:54:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:17.933 10:54:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.933 10:54:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:17.933 10:54:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.933 10:54:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:17.933 10:54:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:17.933 10:54:34 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:17.933 10:54:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:17.933 10:54:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:17.933 10:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.933 10:54:34 -- nvmf/common.sh@469 -- # nvmfpid=3523524 00:24:17.933 10:54:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:17.933 10:54:34 -- nvmf/common.sh@470 -- # waitforlisten 3523524 00:24:17.933 10:54:34 -- common/autotest_common.sh@819 -- # '[' -z 3523524 ']' 00:24:17.933 10:54:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.933 10:54:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:17.933 10:54:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.933 10:54:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:17.933 10:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:17.933 [2024-07-10 10:54:34.701175] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:17.933 [2024-07-10 10:54:34.701248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.933 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.259 [2024-07-10 10:54:34.768786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.259 [2024-07-10 10:54:34.855069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:18.259 [2024-07-10 10:54:34.855198] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.259 [2024-07-10 10:54:34.855215] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.259 [2024-07-10 10:54:34.855228] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:18.259 [2024-07-10 10:54:34.855292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.259 [2024-07-10 10:54:34.855344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.259 [2024-07-10 10:54:34.855410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.259 [2024-07-10 10:54:34.855413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.259 10:54:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:18.259 10:54:34 -- common/autotest_common.sh@852 -- # return 0 00:24:18.259 10:54:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:18.259 10:54:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:18.259 10:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.259 10:54:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.259 10:54:34 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:18.259 10:54:34 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:18.259 10:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.259 10:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.259 10:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.259 10:54:34 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:18.259 10:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.259 10:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:18.259 10:54:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.259 10:54:35 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:18.259 10:54:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.259 10:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.259 [2024-07-10 10:54:35.053334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.259 10:54:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.259 10:54:35 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:18.259 10:54:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.259 10:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.559 Malloc1 00:24:18.559 10:54:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.559 10:54:35 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.559 10:54:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.559 10:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.559 10:54:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.559 10:54:35 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:18.559 10:54:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.559 10:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.559 10:54:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.559 10:54:35 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.559 10:54:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.559 10:54:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.559 [2024-07-10 10:54:35.106791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.559 10:54:35 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.559 10:54:35 -- target/perf_adq.sh@73 -- # perfpid=3523561 00:24:18.559 10:54:35 -- target/perf_adq.sh@74 -- # sleep 2 00:24:18.559 10:54:35 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:18.559 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.458 10:54:37 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:20.458 10:54:37 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:20.458 10:54:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.458 10:54:37 -- target/perf_adq.sh@76 -- # wc -l 00:24:20.458 10:54:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.458 10:54:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.458 10:54:37 -- target/perf_adq.sh@76 -- # count=4 00:24:20.458 10:54:37 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:20.458 10:54:37 -- target/perf_adq.sh@81 -- # wait 3523561 00:24:28.597 Initializing NVMe Controllers 00:24:28.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:28.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:28.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:28.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:28.597 Initialization complete. Launching workers. 00:24:28.597 ======================================================== 00:24:28.597 Latency(us) 00:24:28.597 Device Information : IOPS MiB/s Average min max 00:24:28.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11805.80 46.12 5421.24 1507.51 9142.72 00:24:28.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9319.70 36.41 6867.21 1460.83 10572.81 00:24:28.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11474.20 44.82 5577.85 1019.84 9172.61 00:24:28.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11600.90 45.32 5516.85 1297.89 9202.66 00:24:28.597 ======================================================== 00:24:28.597 Total : 44200.59 172.66 5791.87 1019.84 10572.81 00:24:28.597 00:24:28.597 10:54:45 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:28.597 10:54:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:28.597 10:54:45 -- nvmf/common.sh@116 -- # sync 00:24:28.597 10:54:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:28.597 10:54:45 -- nvmf/common.sh@119 -- # set +e 00:24:28.597 10:54:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:28.597 10:54:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:28.597 rmmod nvme_tcp 00:24:28.597 rmmod nvme_fabrics 00:24:28.597 rmmod nvme_keyring 00:24:28.597 10:54:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:28.597 10:54:45 -- nvmf/common.sh@123 -- # set -e 00:24:28.597 10:54:45 -- nvmf/common.sh@124 -- # return 0 00:24:28.597 10:54:45 -- nvmf/common.sh@477 -- # '[' -n 3523524 ']' 00:24:28.597 10:54:45 -- nvmf/common.sh@478 -- # killprocess 3523524 00:24:28.597 10:54:45 -- common/autotest_common.sh@926 -- # '[' -z 3523524 ']' 00:24:28.597 10:54:45 -- common/autotest_common.sh@930 -- # 
kill -0 3523524 00:24:28.597 10:54:45 -- common/autotest_common.sh@931 -- # uname 00:24:28.597 10:54:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:28.597 10:54:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3523524 00:24:28.597 10:54:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:28.597 10:54:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:28.597 10:54:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3523524' 00:24:28.597 killing process with pid 3523524 00:24:28.597 10:54:45 -- common/autotest_common.sh@945 -- # kill 3523524 00:24:28.597 10:54:45 -- common/autotest_common.sh@950 -- # wait 3523524 00:24:28.855 10:54:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.855 10:54:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:28.855 10:54:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:28.855 10:54:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.855 10:54:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:28.855 10:54:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.855 10:54:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.855 10:54:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.384 10:54:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:31.384 10:54:47 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:31.384 10:54:47 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:31.642 10:54:48 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:33.542 10:54:50 -- target/perf_adq.sh@54 -- # sleep 5 00:24:38.812 10:54:55 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:38.812 10:54:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:38.812 10:54:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.812 10:54:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:38.812 10:54:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:38.812 10:54:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:38.812 10:54:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.812 10:54:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.812 10:54:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.812 10:54:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:38.812 10:54:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:38.812 10:54:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:38.812 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.812 10:54:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:38.812 10:54:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:38.812 10:54:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:38.812 10:54:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:38.812 10:54:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:38.812 10:54:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:38.812 10:54:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:38.812 10:54:55 -- nvmf/common.sh@294 -- # net_devs=() 00:24:38.812 10:54:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:38.812 10:54:55 -- nvmf/common.sh@295 -- # e810=() 00:24:38.812 10:54:55 -- nvmf/common.sh@295 -- # local -ga e810 00:24:38.812 10:54:55 -- nvmf/common.sh@296 -- # x722=() 00:24:38.812 10:54:55 -- nvmf/common.sh@296 -- # local -ga x722 00:24:38.812 10:54:55 -- nvmf/common.sh@297 -- # mlx=() 00:24:38.812 10:54:55 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:24:38.812 10:54:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.812 10:54:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:38.812 10:54:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:38.812 10:54:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:38.812 10:54:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:38.812 10:54:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:38.812 10:54:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:38.813 10:54:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:38.813 10:54:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.813 10:54:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:38.813 10:54:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.813 10:54:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:38.813 10:54:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:38.813 10:54:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.813 10:54:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:38.813 10:54:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.813 10:54:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.813 10:54:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.813 10:54:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:38.813 10:54:55 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.813 10:54:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:38.813 10:54:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.813 10:54:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.813 10:54:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.813 10:54:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:38.813 10:54:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:38.813 10:54:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:38.813 10:54:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.813 10:54:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.813 10:54:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.813 10:54:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:38.813 10:54:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.813 10:54:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.813 10:54:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:38.813 10:54:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.813 10:54:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.813 10:54:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:38.813 10:54:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:38.813 10:54:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.813 10:54:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.813 10:54:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.813 10:54:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.813 10:54:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:38.813 10:54:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.813 10:54:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.813 10:54:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.813 10:54:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:38.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:24:38.813 00:24:38.813 --- 10.0.0.2 ping statistics --- 00:24:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.813 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:38.813 10:54:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:24:38.813 00:24:38.813 --- 10.0.0.1 ping statistics --- 00:24:38.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.813 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:38.813 10:54:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.813 10:54:55 -- nvmf/common.sh@410 -- # return 0 00:24:38.813 10:54:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:38.813 10:54:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.813 10:54:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:38.813 10:54:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.813 10:54:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:38.813 10:54:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:38.813 10:54:55 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:38.813 10:54:55 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:38.813 10:54:55 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:38.813 10:54:55 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:38.813 net.core.busy_poll = 1 00:24:38.813 10:54:55 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:38.813 net.core.busy_read = 1 00:24:38.813 10:54:55 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:38.813 10:54:55 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:38.813 10:54:55 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:38.813 10:54:55 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:38.813 10:54:55 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:38.813 10:54:55 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:38.813 10:54:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:38.813 10:54:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:38.813 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.813 10:54:55 -- nvmf/common.sh@469 -- # nvmfpid=3526267 00:24:38.813 10:54:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:38.813 10:54:55 -- nvmf/common.sh@470 -- # waitforlisten 3526267 00:24:38.813 10:54:55 -- common/autotest_common.sh@819 -- # '[' -z 3526267 ']' 00:24:38.813 10:54:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.813 10:54:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:38.813 10:54:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
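Before the nvmf target is restarted for the ADQ run above, adq_configure_driver prepares the E810 port: hardware TC offload is enabled on cvl_0_0 inside the cvl_0_0_ns_spdk namespace, socket busy polling is switched on from the host namespace, an mqprio root qdisc splits the port into two traffic classes, and a hardware-offloaded flower filter steers NVMe/TCP traffic (TCP destination port 4420 toward 10.0.0.2) into the second class; the set_xps_rxqs script then, as its name suggests, aligns transmit packet steering with the receive queues. A stand-alone sketch of that sequence, reusing the interface and namespace names from this run, would look like:

    NS="ip netns exec cvl_0_0_ns_spdk"    # namespace and interface names are taken from this run
    DEV=cvl_0_0
    $NS ethtool --offload "$DEV" hw-tc-offload on
    $NS ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1        # run from the host namespace, as in the trace
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 gets 2 queues starting at 0, TC1 gets 2 queues starting at 2.
    # The map "0 1" sends socket priority 0 to TC0 and priority 1 to TC1.
    $NS tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev "$DEV" ingress
    # Pin NVMe/TCP flows toward the target (dst 10.0.0.2, dport 4420) into TC1, offloaded in hardware.
    $NS tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1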
00:24:38.813 10:54:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:38.813 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.813 [2024-07-10 10:54:55.633588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:38.813 [2024-07-10 10:54:55.633682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.072 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.072 [2024-07-10 10:54:55.699585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.072 [2024-07-10 10:54:55.787676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:39.072 [2024-07-10 10:54:55.787819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.072 [2024-07-10 10:54:55.787838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.072 [2024-07-10 10:54:55.787850] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.072 [2024-07-10 10:54:55.787898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.072 [2024-07-10 10:54:55.787955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.072 [2024-07-10 10:54:55.788021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.072 [2024-07-10 10:54:55.788023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.072 10:54:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:39.072 10:54:55 -- common/autotest_common.sh@852 -- # return 0 00:24:39.072 10:54:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:39.072 10:54:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:39.072 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.072 10:54:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.072 10:54:55 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:39.072 10:54:55 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:39.072 10:54:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.072 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.072 10:54:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.072 10:54:55 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:39.072 10:54:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.072 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.330 10:54:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.330 10:54:55 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:39.330 10:54:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.330 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:39.330 [2024-07-10 10:54:55.985243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.330 10:54:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.330 10:54:55 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.330 10:54:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.330 10:54:55 -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.330 Malloc1 00:24:39.330 10:54:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.330 10:54:56 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.330 10:54:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.330 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:24:39.330 10:54:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.330 10:54:56 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:39.330 10:54:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.330 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:24:39.330 10:54:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.330 10:54:56 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.330 10:54:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.330 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:24:39.330 [2024-07-10 10:54:56.038455] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.330 10:54:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.330 10:54:56 -- target/perf_adq.sh@94 -- # perfpid=3526289 00:24:39.330 10:54:56 -- target/perf_adq.sh@95 -- # sleep 2 00:24:39.330 10:54:56 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:39.330 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.231 10:54:58 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:41.231 10:54:58 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:41.231 10:54:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.231 10:54:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.231 10:54:58 -- target/perf_adq.sh@97 -- # wc -l 00:24:41.489 10:54:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.489 10:54:58 -- target/perf_adq.sh@97 -- # count=2 00:24:41.489 10:54:58 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:41.489 10:54:58 -- target/perf_adq.sh@103 -- # wait 3526289 00:24:49.597 Initializing NVMe Controllers 00:24:49.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:49.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:49.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:49.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:49.597 Initialization complete. Launching workers. 
00:24:49.597 ======================================================== 00:24:49.597 Latency(us) 00:24:49.597 Device Information : IOPS MiB/s Average min max 00:24:49.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5142.70 20.09 12448.10 2165.32 58901.05 00:24:49.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13984.30 54.63 4576.54 1278.73 7713.98 00:24:49.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5421.40 21.18 11805.41 1367.36 59643.08 00:24:49.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4797.40 18.74 13340.72 1536.67 60228.47 00:24:49.597 ======================================================== 00:24:49.597 Total : 29345.80 114.63 8724.22 1278.73 60228.47 00:24:49.597 00:24:49.597 10:55:06 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:49.597 10:55:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:49.597 10:55:06 -- nvmf/common.sh@116 -- # sync 00:24:49.597 10:55:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:49.597 10:55:06 -- nvmf/common.sh@119 -- # set +e 00:24:49.597 10:55:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:49.597 10:55:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:49.597 rmmod nvme_tcp 00:24:49.597 rmmod nvme_fabrics 00:24:49.597 rmmod nvme_keyring 00:24:49.597 10:55:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:49.597 10:55:06 -- nvmf/common.sh@123 -- # set -e 00:24:49.597 10:55:06 -- nvmf/common.sh@124 -- # return 0 00:24:49.597 10:55:06 -- nvmf/common.sh@477 -- # '[' -n 3526267 ']' 00:24:49.597 10:55:06 -- nvmf/common.sh@478 -- # killprocess 3526267 00:24:49.597 10:55:06 -- common/autotest_common.sh@926 -- # '[' -z 3526267 ']' 00:24:49.597 10:55:06 -- common/autotest_common.sh@930 -- # kill -0 3526267 00:24:49.597 10:55:06 -- common/autotest_common.sh@931 -- # uname 00:24:49.597 10:55:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:49.597 10:55:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3526267 00:24:49.597 10:55:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:49.597 10:55:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:49.597 10:55:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3526267' 00:24:49.597 killing process with pid 3526267 00:24:49.597 10:55:06 -- common/autotest_common.sh@945 -- # kill 3526267 00:24:49.597 10:55:06 -- common/autotest_common.sh@950 -- # wait 3526267 00:24:49.855 10:55:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:49.855 10:55:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:49.855 10:55:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:49.855 10:55:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:49.855 10:55:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:49.855 10:55:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.855 10:55:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.855 10:55:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.138 10:55:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:53.138 10:55:09 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:53.138 00:24:53.138 real 0m44.817s 00:24:53.138 user 2m36.249s 00:24:53.138 sys 0m11.152s 00:24:53.138 10:55:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.138 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:53.138 
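On the target side, the ADQ run above drives the matching options over RPC before spdk_nvme_perf is launched: placement-id based socket grouping and zero-copy sends on the posix sock implementation, a TCP transport created with --sock-priority 1 (priority 1 maps to the second traffic class in the mqprio map configured earlier), and a single Malloc-backed subsystem listening on 10.0.0.2:4420; once the workload is connected, nvmf_get_stats is queried and the poll groups without active I/O qpairs are counted as a quick check on how the connections were distributed. Expressed directly with scripts/rpc.py (rpc_cmd in the trace effectively forwards the same arguments to it), the sequence is roughly:

    rpc="./scripts/rpc.py"                       # default RPC socket /var/tmp/spdk.sock
    $rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    $rpc framework_start_init                    # the target was launched with --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420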
************************************ 00:24:53.139 END TEST nvmf_perf_adq 00:24:53.139 ************************************ 00:24:53.139 10:55:09 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:53.139 10:55:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:53.139 10:55:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.139 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:53.139 ************************************ 00:24:53.139 START TEST nvmf_shutdown 00:24:53.139 ************************************ 00:24:53.139 10:55:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:53.139 * Looking for test storage... 00:24:53.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.139 10:55:09 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.139 10:55:09 -- nvmf/common.sh@7 -- # uname -s 00:24:53.139 10:55:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.139 10:55:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.139 10:55:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.139 10:55:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.139 10:55:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.139 10:55:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.139 10:55:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.139 10:55:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.139 10:55:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.139 10:55:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.139 10:55:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.139 10:55:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.139 10:55:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.139 10:55:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.139 10:55:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.139 10:55:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.139 10:55:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.139 10:55:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.139 10:55:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.139 10:55:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.139 10:55:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.139 10:55:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.139 10:55:09 -- paths/export.sh@5 -- # export PATH 00:24:53.139 10:55:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.139 10:55:09 -- nvmf/common.sh@46 -- # : 0 00:24:53.139 10:55:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:53.139 10:55:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:53.139 10:55:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:53.139 10:55:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.139 10:55:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.139 10:55:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:53.139 10:55:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:53.139 10:55:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:53.139 10:55:09 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.139 10:55:09 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.139 10:55:09 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:53.139 10:55:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:53.139 10:55:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.139 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:53.139 ************************************ 00:24:53.139 START TEST nvmf_shutdown_tc1 00:24:53.139 ************************************ 00:24:53.139 10:55:09 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:53.139 10:55:09 -- target/shutdown.sh@74 -- # starttarget 00:24:53.139 10:55:09 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:53.139 10:55:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:53.139 10:55:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.139 10:55:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:53.139 10:55:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:53.139 10:55:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:53.139 
10:55:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.139 10:55:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.139 10:55:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.139 10:55:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:53.139 10:55:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:53.139 10:55:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:53.139 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:55.039 10:55:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:55.040 10:55:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:55.040 10:55:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:55.040 10:55:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:55.040 10:55:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:55.040 10:55:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:55.040 10:55:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:55.040 10:55:11 -- nvmf/common.sh@294 -- # net_devs=() 00:24:55.040 10:55:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:55.040 10:55:11 -- nvmf/common.sh@295 -- # e810=() 00:24:55.040 10:55:11 -- nvmf/common.sh@295 -- # local -ga e810 00:24:55.040 10:55:11 -- nvmf/common.sh@296 -- # x722=() 00:24:55.040 10:55:11 -- nvmf/common.sh@296 -- # local -ga x722 00:24:55.040 10:55:11 -- nvmf/common.sh@297 -- # mlx=() 00:24:55.040 10:55:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:55.040 10:55:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.040 10:55:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:55.040 10:55:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:55.040 10:55:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:55.040 10:55:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:55.040 10:55:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:55.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:55.040 10:55:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:24:55.040 10:55:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:55.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:55.040 10:55:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:55.040 10:55:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.040 10:55:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.040 10:55:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.040 10:55:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.040 10:55:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:55.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:55.040 10:55:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.040 10:55:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.040 10:55:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.040 10:55:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.040 10:55:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.040 10:55:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:55.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:55.040 10:55:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.040 10:55:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:55.040 10:55:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:55.040 10:55:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:55.040 10:55:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.040 10:55:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.040 10:55:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.040 10:55:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:55.040 10:55:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.040 10:55:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.040 10:55:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:55.040 10:55:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.040 10:55:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.040 10:55:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:55.040 10:55:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:55.040 10:55:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.040 10:55:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.040 10:55:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.040 10:55:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.040 10:55:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:55.040 10:55:11 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.040 10:55:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.040 10:55:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.040 10:55:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:55.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:55.040 00:24:55.040 --- 10.0.0.2 ping statistics --- 00:24:55.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.040 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:55.040 10:55:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:55.040 00:24:55.040 --- 10.0.0.1 ping statistics --- 00:24:55.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.040 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:55.040 10:55:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.040 10:55:11 -- nvmf/common.sh@410 -- # return 0 00:24:55.040 10:55:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:55.040 10:55:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.040 10:55:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:55.040 10:55:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.040 10:55:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:55.040 10:55:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:55.040 10:55:11 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:55.040 10:55:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:55.040 10:55:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.040 10:55:11 -- common/autotest_common.sh@10 -- # set +x 00:24:55.040 10:55:11 -- nvmf/common.sh@469 -- # nvmfpid=3529634 00:24:55.040 10:55:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:55.040 10:55:11 -- nvmf/common.sh@470 -- # waitforlisten 3529634 00:24:55.040 10:55:11 -- common/autotest_common.sh@819 -- # '[' -z 3529634 ']' 00:24:55.040 10:55:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.040 10:55:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.040 10:55:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.040 10:55:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.040 10:55:11 -- common/autotest_common.sh@10 -- # set +x 00:24:55.299 [2024-07-10 10:55:11.884113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
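For the shutdown tests the target is started again inside the same namespace, this time with core mask 0x1E so its four reactors land on cores 1 through 4 and core 0 stays free for the initiator-side helpers started later with -m 0x1; waitforlisten then blocks until the RPC socket answers before the script continues. A minimal sketch of that start-and-wait pattern (the real nvmfappstart/waitforlisten helpers live in nvmf/common.sh and autotest_common.sh; the retry count, poll interval and use of rpc_get_methods here are assumptions):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # The target counts as up once its RPC socket answers a harmless query.
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || exit 1             # give up if the target already exited
        sleep 0.1
    done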
00:24:55.299 [2024-07-10 10:55:11.884188] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.299 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.299 [2024-07-10 10:55:11.950843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.299 [2024-07-10 10:55:12.037531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.299 [2024-07-10 10:55:12.037704] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.299 [2024-07-10 10:55:12.037721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.299 [2024-07-10 10:55:12.037733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.299 [2024-07-10 10:55:12.037795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.299 [2024-07-10 10:55:12.037855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.299 [2024-07-10 10:55:12.037903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.299 [2024-07-10 10:55:12.037905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.233 10:55:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.233 10:55:12 -- common/autotest_common.sh@852 -- # return 0 00:24:56.233 10:55:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:56.233 10:55:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:56.233 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 10:55:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.233 10:55:12 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.233 10:55:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.233 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 [2024-07-10 10:55:12.859091] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.233 10:55:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.233 10:55:12 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:56.233 10:55:12 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:56.233 10:55:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:56.233 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 10:55:12 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- 
target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.233 10:55:12 -- target/shutdown.sh@28 -- # cat 00:24:56.233 10:55:12 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:56.233 10:55:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.233 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:24:56.233 Malloc1 00:24:56.233 [2024-07-10 10:55:12.947548] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.233 Malloc2 00:24:56.233 Malloc3 00:24:56.490 Malloc4 00:24:56.490 Malloc5 00:24:56.490 Malloc6 00:24:56.490 Malloc7 00:24:56.490 Malloc8 00:24:56.748 Malloc9 00:24:56.748 Malloc10 00:24:56.748 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.748 10:55:13 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:56.748 10:55:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:56.748 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.748 10:55:13 -- target/shutdown.sh@78 -- # perfpid=3529942 00:24:56.748 10:55:13 -- target/shutdown.sh@79 -- # waitforlisten 3529942 /var/tmp/bdevperf.sock 00:24:56.748 10:55:13 -- common/autotest_common.sh@819 -- # '[' -z 3529942 ']' 00:24:56.748 10:55:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.748 10:55:13 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.748 10:55:13 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:56.748 10:55:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:56.748 10:55:13 -- nvmf/common.sh@520 -- # config=() 00:24:56.748 10:55:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
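Before the individual shutdown test cases run, starttarget creates ten identical subsystems by appending the RPC lines for each index in {1..10} to rpcs.txt and replaying the whole file through a single rpc_cmd call, which is why Malloc1 through Malloc10 appear above; a bdev_svc helper is then started on core 0 with a JSON config generated by gen_nvmf_target_json (printed in full further down) that attaches one NVMe-oF controller per subsystem through the bdevperf RPC socket. rpcs.txt itself is not echoed in the trace; a plausible reconstruction of the loop, mirroring the single-subsystem setup used in the ADQ test (the serial-number format and the per-subsystem listener are guesses), is:

    for i in "${num_subsystems[@]}"; do          # num_subsystems=({1..10}) as in the trace
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$(printf '%014d' "$i")"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"                # one batch creates all ten subsystems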
00:24:56.748 10:55:13 -- nvmf/common.sh@520 -- # local subsystem config 00:24:56.748 10:55:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:56.748 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.748 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.748 { 00:24:56.748 "params": { 00:24:56.748 "name": "Nvme$subsystem", 00:24:56.748 "trtype": "$TEST_TRANSPORT", 00:24:56.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.748 "adrfam": "ipv4", 00:24:56.748 "trsvcid": "$NVMF_PORT", 00:24:56.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.748 "hdgst": ${hdgst:-false}, 00:24:56.748 "ddgst": ${ddgst:-false} 00:24:56.748 }, 00:24:56.748 "method": "bdev_nvme_attach_controller" 00:24:56.748 } 00:24:56.748 EOF 00:24:56.748 )") 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.748 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.748 { 00:24:56.748 "params": { 00:24:56.748 "name": "Nvme$subsystem", 00:24:56.748 "trtype": "$TEST_TRANSPORT", 00:24:56.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.748 "adrfam": "ipv4", 00:24:56.748 "trsvcid": "$NVMF_PORT", 00:24:56.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.748 "hdgst": ${hdgst:-false}, 00:24:56.748 "ddgst": ${ddgst:-false} 00:24:56.748 }, 00:24:56.748 "method": "bdev_nvme_attach_controller" 00:24:56.748 } 00:24:56.748 EOF 00:24:56.748 )") 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.748 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.748 { 00:24:56.748 "params": { 00:24:56.748 "name": "Nvme$subsystem", 00:24:56.748 "trtype": "$TEST_TRANSPORT", 00:24:56.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.748 "adrfam": "ipv4", 00:24:56.748 "trsvcid": "$NVMF_PORT", 00:24:56.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.748 "hdgst": ${hdgst:-false}, 00:24:56.748 "ddgst": ${ddgst:-false} 00:24:56.748 }, 00:24:56.748 "method": "bdev_nvme_attach_controller" 00:24:56.748 } 00:24:56.748 EOF 00:24:56.748 )") 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.748 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.748 { 00:24:56.748 "params": { 00:24:56.748 "name": "Nvme$subsystem", 00:24:56.748 "trtype": "$TEST_TRANSPORT", 00:24:56.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.748 "adrfam": "ipv4", 00:24:56.748 "trsvcid": "$NVMF_PORT", 00:24:56.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.748 "hdgst": ${hdgst:-false}, 00:24:56.748 "ddgst": ${ddgst:-false} 00:24:56.748 }, 00:24:56.748 "method": "bdev_nvme_attach_controller" 00:24:56.748 } 00:24:56.748 EOF 00:24:56.748 )") 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.748 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.748 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.748 { 00:24:56.748 "params": { 00:24:56.748 "name": "Nvme$subsystem", 00:24:56.748 "trtype": "$TEST_TRANSPORT", 00:24:56.748 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:56.748 "adrfam": "ipv4", 00:24:56.748 "trsvcid": "$NVMF_PORT", 00:24:56.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.748 "hdgst": ${hdgst:-false}, 00:24:56.749 "ddgst": ${ddgst:-false} 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 } 00:24:56.749 EOF 00:24:56.749 )") 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.749 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.749 { 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme$subsystem", 00:24:56.749 "trtype": "$TEST_TRANSPORT", 00:24:56.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "$NVMF_PORT", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.749 "hdgst": ${hdgst:-false}, 00:24:56.749 "ddgst": ${ddgst:-false} 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 } 00:24:56.749 EOF 00:24:56.749 )") 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.749 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.749 { 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme$subsystem", 00:24:56.749 "trtype": "$TEST_TRANSPORT", 00:24:56.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "$NVMF_PORT", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.749 "hdgst": ${hdgst:-false}, 00:24:56.749 "ddgst": ${ddgst:-false} 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 } 00:24:56.749 EOF 00:24:56.749 )") 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.749 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.749 { 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme$subsystem", 00:24:56.749 "trtype": "$TEST_TRANSPORT", 00:24:56.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "$NVMF_PORT", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.749 "hdgst": ${hdgst:-false}, 00:24:56.749 "ddgst": ${ddgst:-false} 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 } 00:24:56.749 EOF 00:24:56.749 )") 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.749 10:55:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.749 { 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme$subsystem", 00:24:56.749 "trtype": "$TEST_TRANSPORT", 00:24:56.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "$NVMF_PORT", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.749 "hdgst": ${hdgst:-false}, 00:24:56.749 "ddgst": ${ddgst:-false} 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 } 00:24:56.749 EOF 00:24:56.749 )") 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.749 10:55:13 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.749 { 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme$subsystem", 00:24:56.749 "trtype": "$TEST_TRANSPORT", 00:24:56.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "$NVMF_PORT", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.749 "hdgst": ${hdgst:-false}, 00:24:56.749 "ddgst": ${ddgst:-false} 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 } 00:24:56.749 EOF 00:24:56.749 )") 00:24:56.749 10:55:13 -- nvmf/common.sh@542 -- # cat 00:24:56.749 10:55:13 -- nvmf/common.sh@544 -- # jq . 00:24:56.749 10:55:13 -- nvmf/common.sh@545 -- # IFS=, 00:24:56.749 10:55:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme1", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme2", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme3", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme4", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme5", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme6", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme7", 00:24:56.749 "trtype": 
"tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme8", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme9", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 },{ 00:24:56.749 "params": { 00:24:56.749 "name": "Nvme10", 00:24:56.749 "trtype": "tcp", 00:24:56.749 "traddr": "10.0.0.2", 00:24:56.749 "adrfam": "ipv4", 00:24:56.749 "trsvcid": "4420", 00:24:56.749 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:56.749 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:56.749 "hdgst": false, 00:24:56.749 "ddgst": false 00:24:56.749 }, 00:24:56.749 "method": "bdev_nvme_attach_controller" 00:24:56.749 }' 00:24:56.749 [2024-07-10 10:55:13.446009] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:56.749 [2024-07-10 10:55:13.446098] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:56.749 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.749 [2024-07-10 10:55:13.509244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.006 [2024-07-10 10:55:13.594774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.532 10:55:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:59.532 10:55:15 -- common/autotest_common.sh@852 -- # return 0 00:24:59.532 10:55:15 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:59.532 10:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:59.532 10:55:15 -- common/autotest_common.sh@10 -- # set +x 00:24:59.532 10:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:59.532 10:55:15 -- target/shutdown.sh@83 -- # kill -9 3529942 00:24:59.532 10:55:15 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:59.532 10:55:15 -- target/shutdown.sh@87 -- # sleep 1 00:25:00.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3529942 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:00.097 10:55:16 -- target/shutdown.sh@88 -- # kill -0 3529634 00:25:00.097 10:55:16 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:00.097 10:55:16 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:00.097 10:55:16 -- nvmf/common.sh@520 -- # config=() 00:25:00.097 10:55:16 -- nvmf/common.sh@520 -- # local subsystem config 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 
00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.097 "trsvcid": "$NVMF_PORT", 00:25:00.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.097 "hdgst": ${hdgst:-false}, 00:25:00.097 "ddgst": ${ddgst:-false} 00:25:00.097 }, 00:25:00.097 "method": "bdev_nvme_attach_controller" 00:25:00.097 } 00:25:00.097 EOF 00:25:00.097 )") 00:25:00.097 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.097 10:55:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:00.097 10:55:16 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:00.097 { 00:25:00.097 "params": { 00:25:00.097 "name": "Nvme$subsystem", 00:25:00.097 "trtype": "$TEST_TRANSPORT", 00:25:00.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.097 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "$NVMF_PORT", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.098 "hdgst": ${hdgst:-false}, 00:25:00.098 "ddgst": ${ddgst:-false} 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 } 00:25:00.098 EOF 00:25:00.098 )") 00:25:00.098 10:55:16 -- nvmf/common.sh@542 -- # cat 00:25:00.098 10:55:16 -- nvmf/common.sh@544 -- # jq . 00:25:00.098 10:55:16 -- nvmf/common.sh@545 -- # IFS=, 00:25:00.098 10:55:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme1", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme2", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme3", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme4", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme5", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme6", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme7", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 
00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme8", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme9", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 },{ 00:25:00.098 "params": { 00:25:00.098 "name": "Nvme10", 00:25:00.098 "trtype": "tcp", 00:25:00.098 "traddr": "10.0.0.2", 00:25:00.098 "adrfam": "ipv4", 00:25:00.098 "trsvcid": "4420", 00:25:00.098 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:00.098 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:00.098 "hdgst": false, 00:25:00.098 "ddgst": false 00:25:00.098 }, 00:25:00.098 "method": "bdev_nvme_attach_controller" 00:25:00.098 }' 00:25:00.098 [2024-07-10 10:55:16.901997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:00.098 [2024-07-10 10:55:16.902085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530386 ] 00:25:00.356 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.356 [2024-07-10 10:55:16.966277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.356 [2024-07-10 10:55:17.050876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.729 Running I/O for 1 seconds... 
00:25:03.103 00:25:03.103 Latency(us) 00:25:03.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.103 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.103 Verification LBA range: start 0x0 length 0x400 00:25:03.103 Nvme1n1 : 1.10 361.78 22.61 0.00 0.00 172855.99 33010.73 160004.93 00:25:03.103 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.103 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme2n1 : 1.07 404.50 25.28 0.00 0.00 154672.23 15049.01 139033.41 00:25:03.104 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme3n1 : 1.08 368.36 23.02 0.00 0.00 167090.45 23204.60 163888.55 00:25:03.104 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme4n1 : 1.07 403.83 25.24 0.00 0.00 152657.05 15340.28 128159.29 00:25:03.104 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme5n1 : 1.10 397.50 24.84 0.00 0.00 154360.88 6941.96 142917.03 00:25:03.104 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme6n1 : 1.08 401.04 25.07 0.00 0.00 151533.80 15437.37 116508.44 00:25:03.104 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme7n1 : 1.11 400.59 25.04 0.00 0.00 151008.13 10291.58 147577.36 00:25:03.104 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme8n1 : 1.11 435.54 27.22 0.00 0.00 138071.05 14272.28 119615.34 00:25:03.104 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme9n1 : 1.10 402.32 25.14 0.00 0.00 148367.75 10874.12 161558.38 00:25:03.104 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:03.104 Verification LBA range: start 0x0 length 0x400 00:25:03.104 Nvme10n1 : 1.10 394.86 24.68 0.00 0.00 149772.08 15049.01 121945.51 00:25:03.104 =================================================================================================================== 00:25:03.104 Total : 3970.33 248.15 0.00 0.00 153574.89 6941.96 163888.55 00:25:03.104 10:55:19 -- target/shutdown.sh@93 -- # stoptarget 00:25:03.104 10:55:19 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:03.104 10:55:19 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:03.104 10:55:19 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.104 10:55:19 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:03.104 10:55:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:03.104 10:55:19 -- nvmf/common.sh@116 -- # sync 00:25:03.104 10:55:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:03.104 10:55:19 -- nvmf/common.sh@119 -- # set +e 00:25:03.104 10:55:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:03.104 10:55:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:03.104 rmmod nvme_tcp 00:25:03.104 rmmod nvme_fabrics 00:25:03.104 rmmod nvme_keyring 
00:25:03.104 10:55:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:03.104 10:55:19 -- nvmf/common.sh@123 -- # set -e 00:25:03.104 10:55:19 -- nvmf/common.sh@124 -- # return 0 00:25:03.104 10:55:19 -- nvmf/common.sh@477 -- # '[' -n 3529634 ']' 00:25:03.104 10:55:19 -- nvmf/common.sh@478 -- # killprocess 3529634 00:25:03.104 10:55:19 -- common/autotest_common.sh@926 -- # '[' -z 3529634 ']' 00:25:03.104 10:55:19 -- common/autotest_common.sh@930 -- # kill -0 3529634 00:25:03.104 10:55:19 -- common/autotest_common.sh@931 -- # uname 00:25:03.104 10:55:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:03.104 10:55:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3529634 00:25:03.362 10:55:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:03.362 10:55:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:03.362 10:55:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3529634' 00:25:03.362 killing process with pid 3529634 00:25:03.362 10:55:19 -- common/autotest_common.sh@945 -- # kill 3529634 00:25:03.362 10:55:19 -- common/autotest_common.sh@950 -- # wait 3529634 00:25:03.620 10:55:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:03.620 10:55:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:03.620 10:55:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:03.620 10:55:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.620 10:55:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:03.620 10:55:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.620 10:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.620 10:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.153 10:55:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:06.153 00:25:06.153 real 0m12.665s 00:25:06.153 user 0m38.529s 00:25:06.153 sys 0m3.319s 00:25:06.153 10:55:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.153 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:25:06.153 ************************************ 00:25:06.153 END TEST nvmf_shutdown_tc1 00:25:06.153 ************************************ 00:25:06.153 10:55:22 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:06.153 10:55:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:06.153 10:55:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.153 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:25:06.153 ************************************ 00:25:06.153 START TEST nvmf_shutdown_tc2 00:25:06.153 ************************************ 00:25:06.153 10:55:22 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:06.153 10:55:22 -- target/shutdown.sh@98 -- # starttarget 00:25:06.153 10:55:22 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:06.153 10:55:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:06.153 10:55:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.153 10:55:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:06.153 10:55:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:06.153 10:55:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:06.153 10:55:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.153 10:55:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.153 10:55:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.153 10:55:22 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:06.153 10:55:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:06.153 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:25:06.153 10:55:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:06.153 10:55:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:06.153 10:55:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:06.153 10:55:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:06.153 10:55:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:06.153 10:55:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:06.153 10:55:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:06.153 10:55:22 -- nvmf/common.sh@294 -- # net_devs=() 00:25:06.153 10:55:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:06.153 10:55:22 -- nvmf/common.sh@295 -- # e810=() 00:25:06.153 10:55:22 -- nvmf/common.sh@295 -- # local -ga e810 00:25:06.153 10:55:22 -- nvmf/common.sh@296 -- # x722=() 00:25:06.153 10:55:22 -- nvmf/common.sh@296 -- # local -ga x722 00:25:06.153 10:55:22 -- nvmf/common.sh@297 -- # mlx=() 00:25:06.153 10:55:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:06.153 10:55:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.153 10:55:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:06.153 10:55:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:06.153 10:55:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:06.153 10:55:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:06.153 10:55:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:06.153 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:06.153 10:55:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:06.153 10:55:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:06.153 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:06.153 10:55:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:06.153 10:55:22 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:06.153 10:55:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:06.153 10:55:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.153 10:55:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:06.153 10:55:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.153 10:55:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:06.153 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:06.153 10:55:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.153 10:55:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:06.153 10:55:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.153 10:55:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:06.153 10:55:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.153 10:55:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:06.153 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:06.153 10:55:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.153 10:55:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:06.153 10:55:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:06.153 10:55:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:06.153 10:55:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.153 10:55:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.153 10:55:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.153 10:55:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:06.153 10:55:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.153 10:55:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.153 10:55:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:06.153 10:55:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.153 10:55:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.153 10:55:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:06.153 10:55:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:06.153 10:55:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.153 10:55:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.153 10:55:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.153 10:55:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.153 10:55:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:06.153 10:55:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.153 10:55:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.153 10:55:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
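For readability, the nvmftestinit/nvmf_tcp_init steps traced above reduce to the following shape: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. This is a condensed sketch of commands already visible in the trace, not a drop-in replacement for nvmf/common.sh; it assumes the two interfaces have already been discovered and named as above.

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # root namespace -> namespaced target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # and back again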
00:25:06.153 10:55:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:06.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:25:06.153 00:25:06.153 --- 10.0.0.2 ping statistics --- 00:25:06.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.153 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:06.153 10:55:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:06.153 00:25:06.153 --- 10.0.0.1 ping statistics --- 00:25:06.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.153 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:06.153 10:55:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.153 10:55:22 -- nvmf/common.sh@410 -- # return 0 00:25:06.153 10:55:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:06.153 10:55:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.153 10:55:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:06.153 10:55:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.153 10:55:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:06.153 10:55:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:06.153 10:55:22 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:06.153 10:55:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:06.153 10:55:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:06.153 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:25:06.153 10:55:22 -- nvmf/common.sh@469 -- # nvmfpid=3531173 00:25:06.153 10:55:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:06.153 10:55:22 -- nvmf/common.sh@470 -- # waitforlisten 3531173 00:25:06.153 10:55:22 -- common/autotest_common.sh@819 -- # '[' -z 3531173 ']' 00:25:06.153 10:55:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.154 10:55:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:06.154 10:55:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.154 10:55:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:06.154 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:25:06.154 [2024-07-10 10:55:22.697495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:06.154 [2024-07-10 10:55:22.697585] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.154 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.154 [2024-07-10 10:55:22.766577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.154 [2024-07-10 10:55:22.855752] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:06.154 [2024-07-10 10:55:22.855924] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.154 [2024-07-10 10:55:22.855945] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.154 [2024-07-10 10:55:22.855960] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.154 [2024-07-10 10:55:22.856061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.154 [2024-07-10 10:55:22.856158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.154 [2024-07-10 10:55:22.856223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.154 [2024-07-10 10:55:22.856227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.085 10:55:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.085 10:55:23 -- common/autotest_common.sh@852 -- # return 0 00:25:07.085 10:55:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:07.085 10:55:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.085 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.085 10:55:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.085 10:55:23 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.085 10:55:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.085 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.085 [2024-07-10 10:55:23.636953] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.085 10:55:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.085 10:55:23 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:07.085 10:55:23 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:07.085 10:55:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:07.085 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.085 10:55:23 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- 
target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:07.085 10:55:23 -- target/shutdown.sh@28 -- # cat 00:25:07.085 10:55:23 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:07.085 10:55:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.085 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.085 Malloc1 00:25:07.085 [2024-07-10 10:55:23.712084] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.085 Malloc2 00:25:07.085 Malloc3 00:25:07.085 Malloc4 00:25:07.085 Malloc5 00:25:07.343 Malloc6 00:25:07.343 Malloc7 00:25:07.343 Malloc8 00:25:07.343 Malloc9 00:25:07.343 Malloc10 00:25:07.343 10:55:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.343 10:55:24 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:07.343 10:55:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.343 10:55:24 -- common/autotest_common.sh@10 -- # set +x 00:25:07.601 10:55:24 -- target/shutdown.sh@102 -- # perfpid=3531367 00:25:07.601 10:55:24 -- target/shutdown.sh@103 -- # waitforlisten 3531367 /var/tmp/bdevperf.sock 00:25:07.601 10:55:24 -- common/autotest_common.sh@819 -- # '[' -z 3531367 ']' 00:25:07.601 10:55:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.601 10:55:24 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:07.601 10:55:24 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:07.601 10:55:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.601 10:55:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.601 10:55:24 -- nvmf/common.sh@520 -- # config=() 00:25:07.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
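The create_subsystems loop a little further up (the repeated cat calls into rpcs.txt, followed by Malloc1..Malloc10 and the "Listening on 10.0.0.2 port 4420" notice) batches one set of RPCs per subsystem and replays them against the target. The exact lines live in test/nvmf/target/shutdown.sh and are not shown in the trace; the snippet below is a hypothetical reconstruction using standard SPDK RPC names, with the malloc size and block size (64 MiB / 512 B) chosen purely for illustration and $rpcs standing in for the rpcs.txt path seen above.

# Hypothetical per-subsystem batch appended to rpcs.txt for subsystem $i; the NQNs match
# the generated bdevperf config, and the batch is later fed to the target via rpc_cmd/rpc.py.
cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF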
00:25:07.601 10:55:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.601 10:55:24 -- nvmf/common.sh@520 -- # local subsystem config 00:25:07.601 10:55:24 -- common/autotest_common.sh@10 -- # set +x 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.601 { 00:25:07.601 "params": { 00:25:07.601 "name": "Nvme$subsystem", 00:25:07.601 "trtype": "$TEST_TRANSPORT", 00:25:07.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.601 "adrfam": "ipv4", 00:25:07.601 "trsvcid": "$NVMF_PORT", 00:25:07.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.601 "hdgst": ${hdgst:-false}, 00:25:07.601 "ddgst": ${ddgst:-false} 00:25:07.601 }, 00:25:07.601 "method": "bdev_nvme_attach_controller" 00:25:07.601 } 00:25:07.601 EOF 00:25:07.601 )") 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.601 10:55:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.601 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.602 { 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme$subsystem", 00:25:07.602 "trtype": "$TEST_TRANSPORT", 00:25:07.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "$NVMF_PORT", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.602 "hdgst": ${hdgst:-false}, 00:25:07.602 "ddgst": ${ddgst:-false} 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 } 00:25:07.602 EOF 00:25:07.602 )") 00:25:07.602 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.602 10:55:24 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:25:07.602 10:55:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.602 { 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme$subsystem", 00:25:07.602 "trtype": "$TEST_TRANSPORT", 00:25:07.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "$NVMF_PORT", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.602 "hdgst": ${hdgst:-false}, 00:25:07.602 "ddgst": ${ddgst:-false} 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 } 00:25:07.602 EOF 00:25:07.602 )") 00:25:07.602 10:55:24 -- nvmf/common.sh@542 -- # cat 00:25:07.602 10:55:24 -- nvmf/common.sh@544 -- # jq . 00:25:07.602 10:55:24 -- nvmf/common.sh@545 -- # IFS=, 00:25:07.602 10:55:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme1", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme2", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme3", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme4", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme5", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme6", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme7", 00:25:07.602 "trtype": 
"tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme8", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme9", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 },{ 00:25:07.602 "params": { 00:25:07.602 "name": "Nvme10", 00:25:07.602 "trtype": "tcp", 00:25:07.602 "traddr": "10.0.0.2", 00:25:07.602 "adrfam": "ipv4", 00:25:07.602 "trsvcid": "4420", 00:25:07.602 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:07.602 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:07.602 "hdgst": false, 00:25:07.602 "ddgst": false 00:25:07.602 }, 00:25:07.602 "method": "bdev_nvme_attach_controller" 00:25:07.602 }' 00:25:07.602 [2024-07-10 10:55:24.214188] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:07.602 [2024-07-10 10:55:24.214276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531367 ] 00:25:07.602 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.602 [2024-07-10 10:55:24.276695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.602 [2024-07-10 10:55:24.361846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.499 Running I/O for 10 seconds... 
00:25:09.499 10:55:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:09.499 10:55:25 -- common/autotest_common.sh@852 -- # return 0 00:25:09.499 10:55:25 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:09.499 10:55:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.499 10:55:25 -- common/autotest_common.sh@10 -- # set +x 00:25:09.499 10:55:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.499 10:55:26 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:09.499 10:55:26 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:09.499 10:55:26 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:09.499 10:55:26 -- target/shutdown.sh@57 -- # local ret=1 00:25:09.499 10:55:26 -- target/shutdown.sh@58 -- # local i 00:25:09.499 10:55:26 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:09.499 10:55:26 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:09.499 10:55:26 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.499 10:55:26 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.499 10:55:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.499 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:25:09.500 10:55:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.500 10:55:26 -- target/shutdown.sh@60 -- # read_io_count=42 00:25:09.500 10:55:26 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:25:09.500 10:55:26 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:09.757 10:55:26 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:09.757 10:55:26 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:09.757 10:55:26 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.757 10:55:26 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.757 10:55:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.757 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:25:09.757 10:55:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.757 10:55:26 -- target/shutdown.sh@60 -- # read_io_count=123 00:25:09.757 10:55:26 -- target/shutdown.sh@63 -- # '[' 123 -ge 100 ']' 00:25:09.757 10:55:26 -- target/shutdown.sh@64 -- # ret=0 00:25:09.757 10:55:26 -- target/shutdown.sh@65 -- # break 00:25:09.757 10:55:26 -- target/shutdown.sh@69 -- # return 0 00:25:09.757 10:55:26 -- target/shutdown.sh@109 -- # killprocess 3531367 00:25:09.757 10:55:26 -- common/autotest_common.sh@926 -- # '[' -z 3531367 ']' 00:25:09.757 10:55:26 -- common/autotest_common.sh@930 -- # kill -0 3531367 00:25:09.757 10:55:26 -- common/autotest_common.sh@931 -- # uname 00:25:09.757 10:55:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:09.757 10:55:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3531367 00:25:09.757 10:55:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:09.757 10:55:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:09.757 10:55:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3531367' 00:25:09.757 killing process with pid 3531367 00:25:09.757 10:55:26 -- common/autotest_common.sh@945 -- # kill 3531367 00:25:09.757 10:55:26 -- common/autotest_common.sh@950 -- # wait 3531367 00:25:09.757 Received shutdown signal, test time was about 0.632587 seconds 00:25:09.757 00:25:09.757 Latency(us) 00:25:09.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:25:09.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme1n1 : 0.58 335.66 20.98 0.00 0.00 181734.63 7864.32 188743.68 00:25:09.757 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme2n1 : 0.57 416.28 26.02 0.00 0.00 146307.46 3859.34 155344.59 00:25:09.757 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme3n1 : 0.63 432.53 27.03 0.00 0.00 132273.05 10777.03 105634.32 00:25:09.757 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme4n1 : 0.63 430.55 26.91 0.00 0.00 130777.07 10437.21 106411.05 00:25:09.757 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme5n1 : 0.62 367.81 22.99 0.00 0.00 149509.59 12427.57 139810.13 00:25:09.757 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme6n1 : 0.63 431.17 26.95 0.00 0.00 128054.61 7767.23 107964.49 00:25:09.757 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme7n1 : 0.56 405.10 25.32 0.00 0.00 139852.72 21359.88 107964.49 00:25:09.757 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.757 Verification LBA range: start 0x0 length 0x400 00:25:09.757 Nvme8n1 : 0.56 404.11 25.26 0.00 0.00 138272.02 22039.51 109517.94 00:25:09.757 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.758 Verification LBA range: start 0x0 length 0x400 00:25:09.758 Nvme9n1 : 0.61 320.37 20.02 0.00 0.00 160539.42 22136.60 163111.82 00:25:09.758 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:09.758 Verification LBA range: start 0x0 length 0x400 00:25:09.758 Nvme10n1 : 0.58 393.07 24.57 0.00 0.00 138303.10 20388.98 119615.34 00:25:09.758 =================================================================================================================== 00:25:09.758 Total : 3936.66 246.04 0.00 0.00 143036.11 3859.34 188743.68 00:25:10.015 10:55:26 -- target/shutdown.sh@112 -- # sleep 1 00:25:11.057 10:55:27 -- target/shutdown.sh@113 -- # kill -0 3531173 00:25:11.057 10:55:27 -- target/shutdown.sh@115 -- # stoptarget 00:25:11.057 10:55:27 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:11.057 10:55:27 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:11.057 10:55:27 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:11.057 10:55:27 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:11.057 10:55:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:11.057 10:55:27 -- nvmf/common.sh@116 -- # sync 00:25:11.057 10:55:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:11.057 10:55:27 -- nvmf/common.sh@119 -- # set +e 00:25:11.057 10:55:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:11.057 10:55:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:11.057 rmmod nvme_tcp 00:25:11.057 rmmod nvme_fabrics 
00:25:11.057 rmmod nvme_keyring 00:25:11.057 10:55:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:11.057 10:55:27 -- nvmf/common.sh@123 -- # set -e 00:25:11.057 10:55:27 -- nvmf/common.sh@124 -- # return 0 00:25:11.057 10:55:27 -- nvmf/common.sh@477 -- # '[' -n 3531173 ']' 00:25:11.057 10:55:27 -- nvmf/common.sh@478 -- # killprocess 3531173 00:25:11.057 10:55:27 -- common/autotest_common.sh@926 -- # '[' -z 3531173 ']' 00:25:11.057 10:55:27 -- common/autotest_common.sh@930 -- # kill -0 3531173 00:25:11.057 10:55:27 -- common/autotest_common.sh@931 -- # uname 00:25:11.057 10:55:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:11.057 10:55:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3531173 00:25:11.057 10:55:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:11.057 10:55:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:11.057 10:55:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3531173' 00:25:11.057 killing process with pid 3531173 00:25:11.057 10:55:27 -- common/autotest_common.sh@945 -- # kill 3531173 00:25:11.057 10:55:27 -- common/autotest_common.sh@950 -- # wait 3531173 00:25:11.625 10:55:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:11.625 10:55:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:11.625 10:55:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:11.625 10:55:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.625 10:55:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:11.625 10:55:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.625 10:55:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.625 10:55:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.158 10:55:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:14.158 00:25:14.158 real 0m7.939s 00:25:14.158 user 0m24.115s 00:25:14.158 sys 0m1.494s 00:25:14.158 10:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.158 10:55:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.158 ************************************ 00:25:14.158 END TEST nvmf_shutdown_tc2 00:25:14.158 ************************************ 00:25:14.158 10:55:30 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:14.158 10:55:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:14.158 10:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:14.158 10:55:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.158 ************************************ 00:25:14.158 START TEST nvmf_shutdown_tc3 00:25:14.158 ************************************ 00:25:14.158 10:55:30 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:14.158 10:55:30 -- target/shutdown.sh@120 -- # starttarget 00:25:14.158 10:55:30 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:14.158 10:55:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:14.158 10:55:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.158 10:55:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:14.158 10:55:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:14.158 10:55:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:14.158 10:55:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.158 10:55:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.158 10:55:30 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:14.158 10:55:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:14.158 10:55:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:14.158 10:55:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.158 10:55:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:14.158 10:55:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:14.158 10:55:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:14.158 10:55:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:14.158 10:55:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:14.158 10:55:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:14.158 10:55:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:14.158 10:55:30 -- nvmf/common.sh@294 -- # net_devs=() 00:25:14.158 10:55:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:14.158 10:55:30 -- nvmf/common.sh@295 -- # e810=() 00:25:14.158 10:55:30 -- nvmf/common.sh@295 -- # local -ga e810 00:25:14.158 10:55:30 -- nvmf/common.sh@296 -- # x722=() 00:25:14.158 10:55:30 -- nvmf/common.sh@296 -- # local -ga x722 00:25:14.158 10:55:30 -- nvmf/common.sh@297 -- # mlx=() 00:25:14.158 10:55:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:14.158 10:55:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.158 10:55:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:14.158 10:55:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:14.158 10:55:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:14.158 10:55:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.158 10:55:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:14.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:14.158 10:55:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.158 10:55:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:14.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:14.158 10:55:30 -- nvmf/common.sh@341 -- # [[ 
ice == unknown ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:14.158 10:55:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.158 10:55:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.158 10:55:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.158 10:55:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.158 10:55:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:14.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:14.158 10:55:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.158 10:55:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.158 10:55:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.158 10:55:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.158 10:55:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.158 10:55:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:14.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:14.158 10:55:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.158 10:55:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:14.158 10:55:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:14.158 10:55:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:14.158 10:55:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.158 10:55:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.158 10:55:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.158 10:55:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:14.158 10:55:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.158 10:55:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.158 10:55:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:14.158 10:55:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.158 10:55:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.158 10:55:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:14.158 10:55:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:14.158 10:55:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.158 10:55:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.158 10:55:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.158 10:55:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.158 10:55:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:14.158 10:55:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.158 10:55:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.158 10:55:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.158 10:55:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:14.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:25:14.158 00:25:14.158 --- 10.0.0.2 ping statistics --- 00:25:14.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.158 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:14.158 10:55:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:25:14.158 00:25:14.158 --- 10.0.0.1 ping statistics --- 00:25:14.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.158 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:14.158 10:55:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.158 10:55:30 -- nvmf/common.sh@410 -- # return 0 00:25:14.158 10:55:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:14.158 10:55:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.158 10:55:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:14.158 10:55:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.158 10:55:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:14.158 10:55:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:14.158 10:55:30 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:14.158 10:55:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:14.158 10:55:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:14.158 10:55:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.158 10:55:30 -- nvmf/common.sh@469 -- # nvmfpid=3532296 00:25:14.158 10:55:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:14.158 10:55:30 -- nvmf/common.sh@470 -- # waitforlisten 3532296 00:25:14.159 10:55:30 -- common/autotest_common.sh@819 -- # '[' -z 3532296 ']' 00:25:14.159 10:55:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.159 10:55:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.159 10:55:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.159 10:55:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.159 10:55:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.159 [2024-07-10 10:55:30.660747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:14.159 [2024-07-10 10:55:30.660843] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.159 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.159 [2024-07-10 10:55:30.728128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.159 [2024-07-10 10:55:30.810849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.159 [2024-07-10 10:55:30.811004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.159 [2024-07-10 10:55:30.811021] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.159 [2024-07-10 10:55:30.811033] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.159 [2024-07-10 10:55:30.811082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.159 [2024-07-10 10:55:30.811142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.159 [2024-07-10 10:55:30.811208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:14.159 [2024-07-10 10:55:30.811210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.092 10:55:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:15.092 10:55:31 -- common/autotest_common.sh@852 -- # return 0 00:25:15.092 10:55:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:15.092 10:55:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:15.092 10:55:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.092 10:55:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.092 10:55:31 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.092 10:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.092 10:55:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.092 [2024-07-10 10:55:31.664102] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.092 10:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.092 10:55:31 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:15.092 10:55:31 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:15.092 10:55:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:15.092 10:55:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.092 10:55:31 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- 
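The nvmftestinit block above moves one port of the dual-port E810 NIC (cvl_0_0) into a private network namespace to act as the target side, keeps the peer port (cvl_0_1) in the root namespace as the initiator side, opens TCP port 4420 through iptables, checks reachability with a ping in each direction, and finally launches nvmf_tgt inside that namespace on cores 1-4 (-m 0x1E). A minimal sketch of that plumbing, reusing the interface names and addresses from this run (paths shortened for readability), is:

  # Namespace-based NVMe/TCP test topology, as set up by nvmftestinit above.
  ip netns add cvl_0_0_ns_spdk                          # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace
  # The target then runs inside the namespace with trace events enabled:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E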
target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:15.092 10:55:31 -- target/shutdown.sh@28 -- # cat 00:25:15.092 10:55:31 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:15.092 10:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.092 10:55:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.092 Malloc1 00:25:15.092 [2024-07-10 10:55:31.753279] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.092 Malloc2 00:25:15.092 Malloc3 00:25:15.092 Malloc4 00:25:15.350 Malloc5 00:25:15.350 Malloc6 00:25:15.350 Malloc7 00:25:15.350 Malloc8 00:25:15.350 Malloc9 00:25:15.609 Malloc10 00:25:15.609 10:55:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.609 10:55:32 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:15.609 10:55:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:15.609 10:55:32 -- common/autotest_common.sh@10 -- # set +x 00:25:15.609 10:55:32 -- target/shutdown.sh@124 -- # perfpid=3532494 00:25:15.609 10:55:32 -- target/shutdown.sh@125 -- # waitforlisten 3532494 /var/tmp/bdevperf.sock 00:25:15.609 10:55:32 -- common/autotest_common.sh@819 -- # '[' -z 3532494 ']' 00:25:15.609 10:55:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.609 10:55:32 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:15.609 10:55:32 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:15.609 10:55:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:15.609 10:55:32 -- nvmf/common.sh@520 -- # config=() 00:25:15.609 10:55:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
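starttarget then writes one block of RPCs per subsystem into rpcs.txt (the repeated cat calls above) and replays the file against the running target, which is why Malloc1 through Malloc10 appear and the NVMe/TCP listener on 10.0.0.2:4420 comes online before bdevperf is started. The RPC lines themselves are not echoed in this trace; a representative block for the first subsystem, assuming the usual SPDK RPCs and placeholder bdev size and serial number, would look like:

  # Illustrative only: the harness assembles equivalent calls in rpcs.txt and
  # drives them through rpc_cmd; the 64 MiB / 512 B size and SPDK1 serial are placeholders.
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420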
00:25:15.609 10:55:32 -- nvmf/common.sh@520 -- # local subsystem config 00:25:15.609 10:55:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:15.609 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.609 10:55:32 -- common/autotest_common.sh@10 -- # set +x 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.609 { 00:25:15.609 "params": { 00:25:15.609 "name": "Nvme$subsystem", 00:25:15.609 "trtype": "$TEST_TRANSPORT", 00:25:15.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.609 "adrfam": "ipv4", 00:25:15.609 "trsvcid": "$NVMF_PORT", 00:25:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.609 "hdgst": ${hdgst:-false}, 00:25:15.609 "ddgst": ${ddgst:-false} 00:25:15.609 }, 00:25:15.609 "method": "bdev_nvme_attach_controller" 00:25:15.609 } 00:25:15.609 EOF 00:25:15.609 )") 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.609 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.609 { 00:25:15.609 "params": { 00:25:15.609 "name": "Nvme$subsystem", 00:25:15.609 "trtype": "$TEST_TRANSPORT", 00:25:15.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.609 "adrfam": "ipv4", 00:25:15.609 "trsvcid": "$NVMF_PORT", 00:25:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.609 "hdgst": ${hdgst:-false}, 00:25:15.609 "ddgst": ${ddgst:-false} 00:25:15.609 }, 00:25:15.609 "method": "bdev_nvme_attach_controller" 00:25:15.609 } 00:25:15.609 EOF 00:25:15.609 )") 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.609 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.609 { 00:25:15.609 "params": { 00:25:15.609 "name": "Nvme$subsystem", 00:25:15.609 "trtype": "$TEST_TRANSPORT", 00:25:15.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.609 "adrfam": "ipv4", 00:25:15.609 "trsvcid": "$NVMF_PORT", 00:25:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.609 "hdgst": ${hdgst:-false}, 00:25:15.609 "ddgst": ${ddgst:-false} 00:25:15.609 }, 00:25:15.609 "method": "bdev_nvme_attach_controller" 00:25:15.609 } 00:25:15.609 EOF 00:25:15.609 )") 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.609 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.609 { 00:25:15.609 "params": { 00:25:15.609 "name": "Nvme$subsystem", 00:25:15.609 "trtype": "$TEST_TRANSPORT", 00:25:15.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.609 "adrfam": "ipv4", 00:25:15.609 "trsvcid": "$NVMF_PORT", 00:25:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.609 "hdgst": ${hdgst:-false}, 00:25:15.609 "ddgst": ${ddgst:-false} 00:25:15.609 }, 00:25:15.609 "method": "bdev_nvme_attach_controller" 00:25:15.609 } 00:25:15.609 EOF 00:25:15.609 )") 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.609 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.609 { 00:25:15.609 "params": { 00:25:15.609 "name": "Nvme$subsystem", 00:25:15.609 "trtype": "$TEST_TRANSPORT", 00:25:15.609 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:15.609 "adrfam": "ipv4", 00:25:15.609 "trsvcid": "$NVMF_PORT", 00:25:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.609 "hdgst": ${hdgst:-false}, 00:25:15.609 "ddgst": ${ddgst:-false} 00:25:15.609 }, 00:25:15.609 "method": "bdev_nvme_attach_controller" 00:25:15.609 } 00:25:15.609 EOF 00:25:15.609 )") 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.609 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.609 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.609 { 00:25:15.609 "params": { 00:25:15.609 "name": "Nvme$subsystem", 00:25:15.609 "trtype": "$TEST_TRANSPORT", 00:25:15.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.609 "adrfam": "ipv4", 00:25:15.609 "trsvcid": "$NVMF_PORT", 00:25:15.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.609 "hdgst": ${hdgst:-false}, 00:25:15.609 "ddgst": ${ddgst:-false} 00:25:15.609 }, 00:25:15.609 "method": "bdev_nvme_attach_controller" 00:25:15.610 } 00:25:15.610 EOF 00:25:15.610 )") 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.610 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.610 { 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme$subsystem", 00:25:15.610 "trtype": "$TEST_TRANSPORT", 00:25:15.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "$NVMF_PORT", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.610 "hdgst": ${hdgst:-false}, 00:25:15.610 "ddgst": ${ddgst:-false} 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 } 00:25:15.610 EOF 00:25:15.610 )") 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.610 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.610 { 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme$subsystem", 00:25:15.610 "trtype": "$TEST_TRANSPORT", 00:25:15.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "$NVMF_PORT", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.610 "hdgst": ${hdgst:-false}, 00:25:15.610 "ddgst": ${ddgst:-false} 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 } 00:25:15.610 EOF 00:25:15.610 )") 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.610 10:55:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.610 { 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme$subsystem", 00:25:15.610 "trtype": "$TEST_TRANSPORT", 00:25:15.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "$NVMF_PORT", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.610 "hdgst": ${hdgst:-false}, 00:25:15.610 "ddgst": ${ddgst:-false} 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 } 00:25:15.610 EOF 00:25:15.610 )") 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.610 10:55:32 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:15.610 { 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme$subsystem", 00:25:15.610 "trtype": "$TEST_TRANSPORT", 00:25:15.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "$NVMF_PORT", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.610 "hdgst": ${hdgst:-false}, 00:25:15.610 "ddgst": ${ddgst:-false} 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 } 00:25:15.610 EOF 00:25:15.610 )") 00:25:15.610 10:55:32 -- nvmf/common.sh@542 -- # cat 00:25:15.610 10:55:32 -- nvmf/common.sh@544 -- # jq . 00:25:15.610 10:55:32 -- nvmf/common.sh@545 -- # IFS=, 00:25:15.610 10:55:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme1", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme2", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme3", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme4", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme5", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme6", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme7", 00:25:15.610 "trtype": 
"tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme8", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme9", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 },{ 00:25:15.610 "params": { 00:25:15.610 "name": "Nvme10", 00:25:15.610 "trtype": "tcp", 00:25:15.610 "traddr": "10.0.0.2", 00:25:15.610 "adrfam": "ipv4", 00:25:15.610 "trsvcid": "4420", 00:25:15.610 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:15.610 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:15.610 "hdgst": false, 00:25:15.610 "ddgst": false 00:25:15.610 }, 00:25:15.610 "method": "bdev_nvme_attach_controller" 00:25:15.610 }' 00:25:15.610 [2024-07-10 10:55:32.256050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:15.611 [2024-07-10 10:55:32.256143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532494 ] 00:25:15.611 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.611 [2024-07-10 10:55:32.320847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.611 [2024-07-10 10:55:32.407483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.510 Running I/O for 10 seconds... 
00:25:17.510 10:55:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:17.510 10:55:33 -- common/autotest_common.sh@852 -- # return 0 00:25:17.510 10:55:33 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:17.510 10:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.510 10:55:33 -- common/autotest_common.sh@10 -- # set +x 00:25:17.510 10:55:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.510 10:55:34 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:17.510 10:55:34 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:17.510 10:55:34 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:17.510 10:55:34 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:17.510 10:55:34 -- target/shutdown.sh@57 -- # local ret=1 00:25:17.510 10:55:34 -- target/shutdown.sh@58 -- # local i 00:25:17.510 10:55:34 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:17.510 10:55:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:17.510 10:55:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:17.510 10:55:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:17.510 10:55:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.510 10:55:34 -- common/autotest_common.sh@10 -- # set +x 00:25:17.510 10:55:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.510 10:55:34 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:17.510 10:55:34 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:17.510 10:55:34 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:17.510 10:55:34 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:17.510 10:55:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:17.510 10:55:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:17.510 10:55:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:17.510 10:55:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.510 10:55:34 -- common/autotest_common.sh@10 -- # set +x 00:25:17.510 10:55:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.768 10:55:34 -- target/shutdown.sh@60 -- # read_io_count=87 00:25:17.768 10:55:34 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:25:17.768 10:55:34 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:18.043 10:55:34 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:18.043 10:55:34 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:18.043 10:55:34 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:18.043 10:55:34 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:18.043 10:55:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.043 10:55:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.043 10:55:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.043 10:55:34 -- target/shutdown.sh@60 -- # read_io_count=213 00:25:18.043 10:55:34 -- target/shutdown.sh@63 -- # '[' 213 -ge 100 ']' 00:25:18.043 10:55:34 -- target/shutdown.sh@64 -- # ret=0 00:25:18.043 10:55:34 -- target/shutdown.sh@65 -- # break 00:25:18.043 10:55:34 -- target/shutdown.sh@69 -- # return 0 00:25:18.043 10:55:34 -- target/shutdown.sh@134 -- # killprocess 3532296 00:25:18.043 10:55:34 -- common/autotest_common.sh@926 -- # '[' -z 3532296 ']' 00:25:18.043 10:55:34 -- common/autotest_common.sh@930 -- # kill 
-0 3532296 00:25:18.043 10:55:34 -- common/autotest_common.sh@931 -- # uname 00:25:18.043 10:55:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:18.043 10:55:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3532296 00:25:18.043 10:55:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:18.043 10:55:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:18.043 10:55:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3532296' 00:25:18.043 killing process with pid 3532296 00:25:18.043 10:55:34 -- common/autotest_common.sh@945 -- # kill 3532296 00:25:18.043 10:55:34 -- common/autotest_common.sh@950 -- # wait 3532296 00:25:18.043 [2024-07-10 10:55:34.666159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666695] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.043 [2024-07-10 10:55:34.666765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.666888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa800c0 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668274] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 
00:25:18.044 [2024-07-10 10:55:34.668871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-10 10:55:34.668873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 he state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with t[2024-07-10 10:55:34.668953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37504 len:12he state(5) to be set 00:25:18.044 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.668979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.668991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.668999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.669004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.669014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37760 len:1[2024-07-10 10:55:34.669016] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 he state(5) to be set 00:25:18.044 [2024-07-10 10:55:34.669030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with t[2024-07-10 10:55:34.669030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:18.044 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.669045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with t[2024-07-10 10:55:34.669049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37888 len:1he state(5) to be set 00:25:18.044 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.044 [2024-07-10 10:55:34.669065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with t[2024-07-10 10:55:34.669066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:25:18.044 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.044 [2024-07-10 10:55:34.669081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-10 10:55:34.669131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 he state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 
00:25:18.045 [2024-07-10 10:55:34.669181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with t[2024-07-10 10:55:34.669212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38528 len:12he state(5) to be set 00:25:18.045 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-10 10:55:34.669325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 he state(5) to 
be set 00:25:18.045 [2024-07-10 10:55:34.669339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to
be set 00:25:18.045 [2024-07-10 10:55:34.669666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75e50 is same with the state(5) to be set 00:25:18.045 [2024-07-10 10:55:34.669684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.045 [2024-07-10 10:55:34.669777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.045 [2024-07-10 10:55:34.669791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.669820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.669849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.669878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.669906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10
10:55:34.669935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.669965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.669979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.669996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.046 [2024-07-10 10:55:34.670198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.670295] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25b3aa0 was disconnected and freed. reset controller. 
00:25:18.046 [2024-07-10 10:55:34.671031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5a30 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.046 [2024-07-10 10:55:34.671312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.046 [2024-07-10 10:55:34.671324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x253b530 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671782] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.046 [2024-07-10 10:55:34.671965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.671977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.671989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 
00:25:18.047 [2024-07-10 10:55:34.672053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.672246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa80570 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.673035] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.047 [2024-07-10 10:55:34.673076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5a30 (9): Bad file descriptor 00:25:18.047 [2024-07-10 10:55:34.674408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) 
to be set 00:25:18.047 [2024-07-10 10:55:34.674485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.674987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675077] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.047 [2024-07-10 10:55:34.675100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.047 [2024-07-10 10:55:34.675138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0
is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7321c0 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.675731] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.048 [2024-07-10 10:55:34.677009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with 
the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677689] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677810] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.048 [2024-07-10 10:55:34.677821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.677893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732650 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678955] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.678997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.679019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.679060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.048 [2024-07-10 10:55:34.679091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 
00:25:18.049 [2024-07-10 10:55:34.679479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is 
same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.679992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.680253] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.049 [2024-07-10 10:55:34.680261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732b00 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681393] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694900 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250fd80 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253b530 (9): Bad file descriptor 00:25:18.049 [2024-07-10 10:55:34.681694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:18.049 [2024-07-10 10:55:34.681789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.049 [2024-07-10 10:55:34.681802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.049 [2024-07-10 10:55:34.681816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2511c80 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.049 [2024-07-10 10:55:34.681868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.050 [2024-07-10 10:55:34.681877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.681888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.681904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.050 [2024-07-10 10:55:34.681919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.681933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.050 [2024-07-10 10:55:34.681947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.681953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.681961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.050 [2024-07-10 10:55:34.681969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.681975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.681983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050
[2024-07-10 10:55:34.681988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de410 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.681996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same 
with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.682638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98e630 is same with the state(5) to be set 00:25:18.050 [2024-07-10 10:55:34.683419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.050 [2024-07-10 10:55:34.683678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.050 [2024-07-10 10:55:34.683692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.683983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.683996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98eae0 is same with the state(5) to be set 00:25:18.051 [2024-07-10 10:55:34.684373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684603] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.684965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.051 [2024-07-10 10:55:34.684987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.051 [2024-07-10 10:55:34.684993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.051 [2024-07-10 10:55:34.685001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.051 [2024-07-10 10:55:34.685009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.052 [2024-07-10 10:55:34.685022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.052 [2024-07-10 10:55:34.685036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.052 [2024-07-10 10:55:34.685049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.052 [2024-07-10 10:55:34.685062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.052 [2024-07-10 10:55:34.685091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10
10:55:34.685093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.052 [2024-07-10 10:55:34.685105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.052 [2024-07-10 10:55:34.685118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.052 [2024-07-10 10:55:34.685131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.052 [2024-07-10 10:55:34.685144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.052 [2024-07-10 10:55:34.685160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.052 [2024-07-10 10:55:34.685189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.052 [2024-07-10 10:55:34.685201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b52f0 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685276]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685281] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25b52f0 was disconnected and freed. reset controller. 00:25:18.052 [2024-07-10 10:55:34.685288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 
00:25:18.052 [2024-07-10 10:55:34.685597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.052 [2024-07-10 10:55:34.685698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.685850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98ef70 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is 
same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686926] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:18.053 [2024-07-10 10:55:34.686950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2694900 (9): Bad file descriptor 00:25:18.053 [2024-07-10 10:55:34.686975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.686988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.687420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f400 is same with the state(5) to be set 00:25:18.053 [2024-07-10 10:55:34.688434] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.053 [2024-07-10 10:55:34.688546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.053 [2024-07-10 10:55:34.688590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.053 [2024-07-10 10:55:34.688621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.053 [2024-07-10 10:55:34.688651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.053 [2024-07-10 10:55:34.688688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.053 [2024-07-10 10:55:34.688728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.053 [2024-07-10 10:55:34.688758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.053 [2024-07-10 10:55:34.688772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.688978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.688994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.054 [2024-07-10 10:55:34.689284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b7eb0 is same with the state(5) to be set 00:25:18.054 [2024-07-10 10:55:34.689451] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25b7eb0 was disconnected and freed. reset controller. 00:25:18.054 [2024-07-10 10:55:34.689829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:18.054 [2024-07-10 10:55:34.689913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.689931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5a30 is same with the state(5) to be set 00:25:18.054 [2024-07-10 10:55:34.691049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:18.054 [2024-07-10 10:55:34.691109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2698110 (9): Bad file descriptor 00:25:18.054 [2024-07-10 10:55:34.691145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2694900 (104): Connection reset by peer 00:25:18.054 [2024-07-10 10:55:34.691188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5a30 (9): Bad file descriptor 00:25:18.054 [2024-07-10 10:55:34.691318] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.054 [2024-07-10 10:55:34.691431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2694900 (9): Bad file descriptor 00:25:18.054 [2024-07-10 10:55:34.691468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.054 [2024-07-10 10:55:34.691488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.054 [2024-07-10 10:55:34.691504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:18.054 [2024-07-10 10:55:34.691549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26add90 is same with the state(5) to be set 00:25:18.054 [2024-07-10 10:55:34.691716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.691827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.691841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ad960 is same with the state(5) to be set 00:25:18.054 [2024-07-10 10:55:34.691870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250fd80 (9): Bad file descriptor 00:25:18.054 [2024-07-10 10:55:34.691910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2511c80 (9): Bad file descriptor 00:25:18.054 [2024-07-10 10:55:34.691939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x24de410 (9): Bad file descriptor 00:25:18.054 [2024-07-10 10:55:34.691987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.692009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.692029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.692044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.054 [2024-07-10 10:55:34.692058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.054 [2024-07-10 10:55:34.692071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.692085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.055 [2024-07-10 10:55:34.692099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.692112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25194f0 is same with the state(5) to be set 00:25:18.055 [2024-07-10 10:55:34.692532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.055 [2024-07-10 10:55:34.692684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.055 [2024-07-10 10:55:34.692823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.055 [2024-07-10 10:55:34.692848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2698110 with addr=10.0.0.2, port=4420 00:25:18.055 [2024-07-10 10:55:34.692864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698110 is same with the state(5) to be set 00:25:18.055 [2024-07-10 10:55:34.692879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:18.055 [2024-07-10 10:55:34.692892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:18.055 [2024-07-10 10:55:34.692906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:25:18.055 [2024-07-10 10:55:34.693007] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:18.055 [2024-07-10 10:55:34.693071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.693974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.693987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.055 [2024-07-10 10:55:34.694230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.055 [2024-07-10 10:55:34.694246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.694977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.694992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.695006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.695022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.695035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.695050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.056 [2024-07-10 10:55:34.695064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.056 [2024-07-10 10:55:34.695079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bbf70 is same with the state(5) to be set 00:25:18.056 [2024-07-10 10:55:34.696706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.056 [2024-07-10 10:55:34.696742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:18.056 [2024-07-10 10:55:34.696785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2698110 (9): Bad file descriptor 00:25:18.056 [2024-07-10 10:55:34.697071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.056 [2024-07-10 10:55:34.697202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.056 [2024-07-10 10:55:34.697228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x253b530 with addr=10.0.0.2, port=4420 00:25:18.056 [2024-07-10 10:55:34.697245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x253b530 is same with the state(5) to be set 00:25:18.056 [2024-07-10 10:55:34.697261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:18.056 [2024-07-10 10:55:34.697274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:18.056 [2024-07-10 10:55:34.697293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:18.056 [2024-07-10 10:55:34.697629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.056 [2024-07-10 10:55:34.697655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.056 [2024-07-10 10:55:34.697682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253b530 (9): Bad file descriptor 00:25:18.056 [2024-07-10 10:55:34.697958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.057 [2024-07-10 10:55:34.698121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.057 [2024-07-10 10:55:34.698146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5a30 with addr=10.0.0.2, port=4420 00:25:18.057 [2024-07-10 10:55:34.698163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5a30 is same with the state(5) to be set 00:25:18.057 [2024-07-10 10:55:34.698178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:18.057 [2024-07-10 10:55:34.698190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:18.057 [2024-07-10 10:55:34.698203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:18.057 [2024-07-10 10:55:34.698268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.057 [2024-07-10 10:55:34.698291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5a30 (9): Bad file descriptor 00:25:18.057 [2024-07-10 10:55:34.698360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.057 [2024-07-10 10:55:34.698380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.057 [2024-07-10 10:55:34.698394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.057 [2024-07-10 10:55:34.698456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:18.057 [2024-07-10 10:55:34.698483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.057 [2024-07-10 10:55:34.698670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.057 [2024-07-10 10:55:34.698811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.057 [2024-07-10 10:55:34.698835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2694900 with addr=10.0.0.2, port=4420 00:25:18.057 [2024-07-10 10:55:34.698851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694900 is same with the state(5) to be set 00:25:18.057 [2024-07-10 10:55:34.698906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2694900 (9): Bad file descriptor 00:25:18.057 [2024-07-10 10:55:34.698961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:18.057 [2024-07-10 10:55:34.698977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:18.057 [2024-07-10 10:55:34.698991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:18.057 [2024-07-10 10:55:34.699044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.057 [2024-07-10 10:55:34.701458] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26add90 (9): Bad file descriptor 00:25:18.057 [2024-07-10 10:55:34.701510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ad960 (9): Bad file descriptor 00:25:18.057 [2024-07-10 10:55:34.701564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25194f0 (9): Bad file descriptor 00:25:18.057 [2024-07-10 10:55:34.701708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.701970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.701986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.057 [2024-07-10 10:55:34.702663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.057 [2024-07-10 10:55:34.702678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.702999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.703715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.703732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267b650 is same with the state(5) to be set 00:25:18.058 [2024-07-10 10:55:34.713042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713224] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.058 [2024-07-10 10:55:34.713313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.058 [2024-07-10 10:55:34.713327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33664 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.713981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.713995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36480 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.059 [2024-07-10 10:55:34.714655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.059 [2024-07-10 10:55:34.714668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.060 [2024-07-10 10:55:34.714795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.714971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.714990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.715004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.715020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.715035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.715051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.715065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.715080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267cb00 is same with the state(5) to be set 00:25:18.060 [2024-07-10 
10:55:34.716290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716627] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.716968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.716991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.717007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.060 [2024-07-10 10:55:34.717021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.060 [2024-07-10 10:55:34.717037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.717980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.717995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.061 [2024-07-10 10:55:34.718281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.061 [2024-07-10 10:55:34.718296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x267d6b0 is same with the state(5) to be set 00:25:18.061 [2024-07-10 10:55:34.719530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:18.062 [2024-07-10 10:55:34.719561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:18.062 [2024-07-10 10:55:34.719580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:18.062 [2024-07-10 10:55:34.719598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:18.062 [2024-07-10 10:55:34.720047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.720183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.720210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2698110 with addr=10.0.0.2, port=4420 00:25:18.062 [2024-07-10 10:55:34.720227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2698110 is same with the state(5) to be set 00:25:18.062 [2024-07-10 10:55:34.720448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.720581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.720605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2511c80 with addr=10.0.0.2, port=4420 00:25:18.062 [2024-07-10 10:55:34.720621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2511c80 is same with the state(5) to be set 00:25:18.062 [2024-07-10 10:55:34.720741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.720864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.720887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x250fd80 with addr=10.0.0.2, port=4420 00:25:18.062 [2024-07-10 10:55:34.720902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250fd80 is same with the state(5) to be set 00:25:18.062 [2024-07-10 10:55:34.721009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.721146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.062 [2024-07-10 10:55:34.721169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24de410 with addr=10.0.0.2, port=4420 00:25:18.062 [2024-07-10 10:55:34.721190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24de410 is same with the state(5) to be set 00:25:18.062 [2024-07-10 10:55:34.722015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.722980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.722994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.723010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.723025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.723040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.723054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.062 [2024-07-10 10:55:34.723070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.062 [2024-07-10 10:55:34.723084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:18.063 [2024-07-10 10:55:34.723192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 
[2024-07-10 10:55:34.723505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 
10:55:34.723808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.723975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.723991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b68d0 is same with the state(5) to be set 00:25:18.063 [2024-07-10 10:55:34.725234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.063 [2024-07-10 10:55:34.725587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.063 [2024-07-10 10:55:34.725603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.725982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.725998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.064 [2024-07-10 10:55:34.726746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.064 [2024-07-10 10:55:34.726761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.726983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.726999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.727194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.727209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b9490 is same with the state(5) to be set 00:25:18.065 [2024-07-10 10:55:34.728400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728703] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.065 [2024-07-10 10:55:34.728945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.065 [2024-07-10 10:55:34.728959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.728975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.728989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33408 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34688 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.729977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.729990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.730006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.730020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.730035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.730049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.730066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.066 [2024-07-10 10:55:34.730080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.066 [2024-07-10 10:55:34.730096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.067 [2024-07-10 10:55:34.730231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.067 [2024-07-10 10:55:34.730350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.067 [2024-07-10 10:55:34.730364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25baa80 is same with the state(5) to be set 00:25:18.067 [2024-07-10 10:55:34.732576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:18.067 [2024-07-10 10:55:34.732608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.067 [2024-07-10 10:55:34.732631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:18.067 [2024-07-10 10:55:34.732650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:18.067 [2024-07-10 10:55:34.732666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:18.067 task offset: 34944 on job bdev=Nvme1n1 fails 00:25:18.067 00:25:18.067 Latency(us) 00:25:18.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.067 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme1n1 ended in about 0.73 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme1n1 : 0.73 351.35 21.96 88.18 0.00 144606.63 4102.07 170102.33 00:25:18.067 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme2n1 ended in about 0.77 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme2n1 : 0.77 327.66 20.48 83.55 0.00 153104.42 87381.33 146023.92 00:25:18.067 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme3n1 ended in about 0.77 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme3n1 : 0.77 326.24 20.39 83.19 0.00 152249.41 86992.97 139033.41 00:25:18.067 Job: Nvme4n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme4n1 ended in about 0.77 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme4n1 : 0.77 324.89 20.31 82.84 0.00 151382.69 78837.38 136703.24 00:25:18.067 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme5n1 ended in about 0.74 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme5n1 : 0.74 351.40 21.96 74.34 0.00 143165.65 2742.80 115731.72 00:25:18.067 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme6n1 ended in about 0.78 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme6n1 : 0.78 322.52 20.16 82.24 0.00 149532.73 88934.78 124275.67 00:25:18.067 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme7n1 ended in about 0.74 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme7n1 : 0.74 391.05 24.44 32.25 0.00 140802.73 2148.12 113401.55 00:25:18.067 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme8n1 ended in about 0.78 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme8n1 : 0.78 321.20 20.07 81.90 0.00 147221.67 82721.00 119615.34 00:25:18.067 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme9n1 ended in about 0.78 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme9n1 : 0.78 319.91 19.99 81.57 0.00 146343.85 75342.13 119615.34 00:25:18.067 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:18.067 Job: Nvme10n1 ended in about 0.75 seconds with error 00:25:18.067 Verification LBA range: start 0x0 length 0x400 00:25:18.067 Nvme10n1 : 0.75 277.55 17.35 85.40 0.00 159479.50 98643.82 135149.80 00:25:18.067 =================================================================================================================== 00:25:18.067 Total : 3313.79 207.11 775.45 0.00 148635.79 2148.12 170102.33 00:25:18.067 [2024-07-10 10:55:34.760968] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:18.067 [2024-07-10 10:55:34.761151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2698110 (9): Bad file descriptor 00:25:18.067 [2024-07-10 10:55:34.761193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2511c80 (9): Bad file descriptor 00:25:18.067 [2024-07-10 10:55:34.761212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250fd80 (9): Bad file descriptor 00:25:18.067 [2024-07-10 10:55:34.761230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24de410 (9): Bad file descriptor 00:25:18.067 [2024-07-10 10:55:34.761296] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.067 [2024-07-10 10:55:34.761321] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.067 [2024-07-10 10:55:34.761341] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
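A note on reading the bdevperf summary above: each device row lists runtime in seconds, IOPS, MiB/s, failed commands per second, timeouts per second, and average/min/max latency in microseconds. As a quick plausibility check (added here for reference, not part of the captured output), the throughput column is just IOPS times the 64 KiB I/O size, i.e. IOPS divided by 16; taking the Nvme1n1 row:

  # Nvme1n1 row: 351.35 IOPS at 65536-byte I/Os -> MiB/s
  printf '%.2f MiB/s\n' "$(echo "351.35 / 16" | bc -l)"   # prints 21.96 MiB/s, matching the table

The non-zero Fail/s column is expected for this run: those are the commands that completed with an abort status once the target side began shutting down.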
00:25:18.067 [2024-07-10 10:55:34.761360] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.067 [2024-07-10 10:55:34.761378] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.067 [2024-07-10 10:55:34.761518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:18.067 [2024-07-10 10:55:34.761826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x253b530 with addr=10.0.0.2, port=4420 00:25:18.067 [2024-07-10 10:55:34.762049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x253b530 is same with the state(5) to be set 00:25:18.067 [2024-07-10 10:55:34.762182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5a30 with addr=10.0.0.2, port=4420 00:25:18.067 [2024-07-10 10:55:34.762366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5a30 is same with the state(5) to be set 00:25:18.067 [2024-07-10 10:55:34.762491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2694900 with addr=10.0.0.2, port=4420 00:25:18.067 [2024-07-10 10:55:34.762677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2694900 is same with the state(5) to be set 00:25:18.067 [2024-07-10 10:55:34.762795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.762948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26ad960 with addr=10.0.0.2, port=4420 00:25:18.067 [2024-07-10 10:55:34.762964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ad960 is same with the state(5) to be set 00:25:18.067 [2024-07-10 10:55:34.763079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.763196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.067 [2024-07-10 10:55:34.763222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26add90 with addr=10.0.0.2, port=4420 00:25:18.067 [2024-07-10 10:55:34.763238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26add90 is same with the state(5) to be set 00:25:18.067 [2024-07-10 10:55:34.763254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:18.067 [2024-07-10 10:55:34.763267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller 
reinitialization failed 00:25:18.067 [2024-07-10 10:55:34.763284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:18.067 [2024-07-10 10:55:34.763305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:18.067 [2024-07-10 10:55:34.763319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:18.067 [2024-07-10 10:55:34.763332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:18.067 [2024-07-10 10:55:34.763348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.763361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.763374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:18.068 [2024-07-10 10:55:34.763390] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.763404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.763416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:18.068 [2024-07-10 10:55:34.763470] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.068 [2024-07-10 10:55:34.763495] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.068 [2024-07-10 10:55:34.763518] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.068 [2024-07-10 10:55:34.763541] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:18.068 [2024-07-10 10:55:34.764386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.764412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.764432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.764446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.068 [2024-07-10 10:55:34.764573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.068 [2024-07-10 10:55:34.764696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.068 [2024-07-10 10:55:34.764721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25194f0 with addr=10.0.0.2, port=4420 00:25:18.068 [2024-07-10 10:55:34.764737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25194f0 is same with the state(5) to be set 00:25:18.068 [2024-07-10 10:55:34.764756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253b530 (9): Bad file descriptor 00:25:18.068 [2024-07-10 10:55:34.764776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5a30 (9): Bad file descriptor 00:25:18.068 [2024-07-10 10:55:34.764794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2694900 (9): Bad file descriptor 00:25:18.068 [2024-07-10 10:55:34.764811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ad960 (9): Bad file descriptor 00:25:18.068 [2024-07-10 10:55:34.764828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26add90 (9): Bad file descriptor 00:25:18.068 [2024-07-10 10:55:34.765179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25194f0 (9): Bad file descriptor 00:25:18.068 [2024-07-10 10:55:34.765207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.765222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.765236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:18.068 [2024-07-10 10:55:34.765254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.765267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.765281] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.068 [2024-07-10 10:55:34.765298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.765312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.765324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:18.068 [2024-07-10 10:55:34.765340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.765354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.765367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
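The error cascade in this block hangs together: this is the shutdown test, so the target evidently goes away while bdevperf still has I/O queued. Outstanding commands complete as ABORTED - SQ DELETION, every reconnect attempt fails in connect() with errno = 111 because nothing is listening at 10.0.0.2 any longer, and each controller reset therefore ends in "controller reinitialization failed". On Linux, errno 111 is ECONNREFUSED, which a one-liner (illustrative, not part of the log) confirms:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # -> ECONNREFUSED - Connection refused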
00:25:18.068 [2024-07-10 10:55:34.765383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.765396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.765409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:18.068 [2024-07-10 10:55:34.765489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.765510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.765523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.765535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.765547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.068 [2024-07-10 10:55:34.765559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:18.068 [2024-07-10 10:55:34.765572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:18.068 [2024-07-10 10:55:34.765585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:18.068 [2024-07-10 10:55:34.765626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.635 10:55:35 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:18.635 10:55:35 -- target/shutdown.sh@138 -- # sleep 1 00:25:19.573 10:55:36 -- target/shutdown.sh@141 -- # kill -9 3532494 00:25:19.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3532494) - No such process 00:25:19.573 10:55:36 -- target/shutdown.sh@141 -- # true 00:25:19.573 10:55:36 -- target/shutdown.sh@143 -- # stoptarget 00:25:19.573 10:55:36 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:19.573 10:55:36 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:19.573 10:55:36 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:19.573 10:55:36 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:19.573 10:55:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:19.573 10:55:36 -- nvmf/common.sh@116 -- # sync 00:25:19.573 10:55:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:19.573 10:55:36 -- nvmf/common.sh@119 -- # set +e 00:25:19.573 10:55:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:19.573 10:55:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:19.573 rmmod nvme_tcp 00:25:19.573 rmmod nvme_fabrics 00:25:19.573 rmmod nvme_keyring 00:25:19.573 10:55:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:19.573 10:55:36 -- nvmf/common.sh@123 -- # set -e 00:25:19.573 10:55:36 -- nvmf/common.sh@124 -- # return 0 00:25:19.573 10:55:36 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:19.573 10:55:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:19.573 10:55:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:19.573 10:55:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:19.573 10:55:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.573 10:55:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:19.573 10:55:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.573 10:55:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.573 10:55:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.106 10:55:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:22.106 00:25:22.106 real 0m7.914s 00:25:22.106 user 0m20.175s 00:25:22.106 sys 0m1.538s 00:25:22.106 10:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:22.106 10:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 ************************************ 00:25:22.106 END TEST nvmf_shutdown_tc3 00:25:22.106 ************************************ 00:25:22.106 10:55:38 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:22.106 00:25:22.106 real 0m28.674s 00:25:22.106 user 1m22.875s 00:25:22.106 sys 0m6.469s 00:25:22.106 10:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:22.106 10:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 ************************************ 00:25:22.106 END TEST nvmf_shutdown 00:25:22.106 ************************************ 00:25:22.106 10:55:38 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:22.106 10:55:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:22.106 10:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 10:55:38 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:22.106 10:55:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:22.106 10:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 10:55:38 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:22.106 10:55:38 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:22.106 10:55:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:22.106 10:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:22.106 10:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.106 ************************************ 00:25:22.106 START TEST nvmf_multicontroller 00:25:22.106 ************************************ 00:25:22.106 10:55:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:22.106 * Looking for test storage... 
00:25:22.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.106 10:55:38 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.106 10:55:38 -- nvmf/common.sh@7 -- # uname -s 00:25:22.106 10:55:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.106 10:55:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.106 10:55:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.107 10:55:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.107 10:55:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.107 10:55:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.107 10:55:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.107 10:55:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.107 10:55:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.107 10:55:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.107 10:55:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.107 10:55:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:22.107 10:55:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.107 10:55:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.107 10:55:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.107 10:55:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.107 10:55:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.107 10:55:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.107 10:55:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.107 10:55:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.107 10:55:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.107 10:55:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.107 10:55:38 -- paths/export.sh@5 -- # export PATH 00:25:22.107 10:55:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.107 10:55:38 -- nvmf/common.sh@46 -- # : 0 00:25:22.107 10:55:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:22.107 10:55:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:22.107 10:55:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:22.107 10:55:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.107 10:55:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.107 10:55:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:22.107 10:55:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:22.107 10:55:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:22.107 10:55:38 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:22.107 10:55:38 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:22.107 10:55:38 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:22.107 10:55:38 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:22.107 10:55:38 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:22.107 10:55:38 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:22.107 10:55:38 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:22.107 10:55:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:22.107 10:55:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.107 10:55:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:22.107 10:55:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:22.107 10:55:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:22.107 10:55:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.107 10:55:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.107 10:55:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.107 10:55:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:22.107 10:55:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:22.107 10:55:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:22.107 10:55:38 -- common/autotest_common.sh@10 -- # set +x 00:25:24.008 10:55:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:24.008 10:55:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:24.008 10:55:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:24.008 10:55:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:24.008 
10:55:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:24.008 10:55:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:24.008 10:55:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:24.008 10:55:40 -- nvmf/common.sh@294 -- # net_devs=() 00:25:24.008 10:55:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:24.008 10:55:40 -- nvmf/common.sh@295 -- # e810=() 00:25:24.008 10:55:40 -- nvmf/common.sh@295 -- # local -ga e810 00:25:24.008 10:55:40 -- nvmf/common.sh@296 -- # x722=() 00:25:24.008 10:55:40 -- nvmf/common.sh@296 -- # local -ga x722 00:25:24.008 10:55:40 -- nvmf/common.sh@297 -- # mlx=() 00:25:24.008 10:55:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:24.008 10:55:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.008 10:55:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:24.008 10:55:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:24.008 10:55:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:24.008 10:55:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:24.008 10:55:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:24.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:24.008 10:55:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:24.008 10:55:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:24.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:24.008 10:55:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:24.008 10:55:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:24.008 10:55:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:25:24.008 10:55:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.009 10:55:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:24.009 10:55:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.009 10:55:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:24.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:24.009 10:55:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.009 10:55:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:24.009 10:55:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.009 10:55:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:24.009 10:55:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.009 10:55:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:24.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:24.009 10:55:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.009 10:55:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:24.009 10:55:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:24.009 10:55:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:24.009 10:55:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:24.009 10:55:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:24.009 10:55:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.009 10:55:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.009 10:55:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.009 10:55:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:24.009 10:55:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.009 10:55:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.009 10:55:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:24.009 10:55:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.009 10:55:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.009 10:55:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:24.009 10:55:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:24.009 10:55:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.009 10:55:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.009 10:55:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.009 10:55:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.009 10:55:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:24.009 10:55:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.009 10:55:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.009 10:55:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.009 10:55:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:24.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:24.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:25:24.009 00:25:24.009 --- 10.0.0.2 ping statistics --- 00:25:24.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.009 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:25:24.009 10:55:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:25:24.009 00:25:24.009 --- 10.0.0.1 ping statistics --- 00:25:24.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.009 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:24.009 10:55:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.009 10:55:40 -- nvmf/common.sh@410 -- # return 0 00:25:24.009 10:55:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:24.009 10:55:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.009 10:55:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:24.009 10:55:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:24.009 10:55:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.009 10:55:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:24.009 10:55:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:24.009 10:55:40 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:24.009 10:55:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:24.009 10:55:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:24.009 10:55:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.009 10:55:40 -- nvmf/common.sh@469 -- # nvmfpid=3535033 00:25:24.009 10:55:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:24.009 10:55:40 -- nvmf/common.sh@470 -- # waitforlisten 3535033 00:25:24.009 10:55:40 -- common/autotest_common.sh@819 -- # '[' -z 3535033 ']' 00:25:24.009 10:55:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.009 10:55:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:24.009 10:55:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.009 10:55:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:24.009 10:55:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.009 [2024-07-10 10:55:40.756537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:24.009 [2024-07-10 10:55:40.756612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.009 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.009 [2024-07-10 10:55:40.824379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:24.267 [2024-07-10 10:55:40.908180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:24.267 [2024-07-10 10:55:40.908317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.267 [2024-07-10 10:55:40.908333] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
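For orientation, the phy test topology that nvmftestinit assembled in the trace above boils down to moving one E810 port into a private network namespace (the target side) while the other stays in the root namespace (the initiator side). Condensed from the commands already shown, with the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses being the ones this particular run uses:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # reachability check

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the initiator-side tools reach it at 10.0.0.2.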
00:25:24.267 [2024-07-10 10:55:40.908345] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.267 [2024-07-10 10:55:40.908400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.267 [2024-07-10 10:55:40.908465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.267 [2024-07-10 10:55:40.908470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.201 10:55:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:25.201 10:55:41 -- common/autotest_common.sh@852 -- # return 0 00:25:25.201 10:55:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:25.201 10:55:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.201 10:55:41 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 [2024-07-10 10:55:41.724128] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 Malloc0 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 [2024-07-10 10:55:41.787637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 [2024-07-10 10:55:41.795557] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 Malloc1 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:25.201 10:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.201 10:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.201 10:55:41 -- host/multicontroller.sh@44 -- # bdevperf_pid=3535198 00:25:25.201 10:55:41 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:25.201 10:55:41 -- host/multicontroller.sh@47 -- # waitforlisten 3535198 /var/tmp/bdevperf.sock 00:25:25.201 10:55:41 -- common/autotest_common.sh@819 -- # '[' -z 3535198 ']' 00:25:25.201 10:55:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.201 10:55:41 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:25.201 10:55:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:25.201 10:55:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
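What follows is the core of the multicontroller test: the idle bdevperf process is driven over /var/tmp/bdevperf.sock, asked to attach controller NVMe0, then asked to attach the same name again with conflicting or redundant parameters (a different hostnqn, a different subsystem, multipath disabled, or the identical path with -x failover), each of which must be rejected, and finally handed a genuine second path on port 4421. rpc_cmd in the trace is assumed here to forward to scripts/rpc.py against the bdevperf RPC socket; the flag values below are exactly the ones visible in the trace:

  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # first attach succeeds and exposes bdev NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # re-using the name NVMe0 with a different hostnqn (or with cnode2, or "-x disable")
  # is rejected; each such call returns JSON-RPC error -114, as the responses below show
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
  # a second listener port on the same subsystem is a legitimate extra path and is accepted
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1

The test then detaches that 4421 path again and re-attaches it under the separate name NVMe1, so bdev_nvme_get_controllers reports two controllers before the bdevperf workload is triggered with perform_tests.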
00:25:25.201 10:55:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:25.201 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:25:26.134 10:55:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:26.134 10:55:42 -- common/autotest_common.sh@852 -- # return 0 00:25:26.134 10:55:42 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:26.134 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.134 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.134 NVMe0n1 00:25:26.135 10:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.135 10:55:42 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.135 10:55:42 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:26.135 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.135 10:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.135 1 00:25:26.135 10:55:42 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:26.135 10:55:42 -- common/autotest_common.sh@640 -- # local es=0 00:25:26.135 10:55:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:26.135 10:55:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:26.135 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.135 request: 00:25:26.135 { 00:25:26.135 "name": "NVMe0", 00:25:26.135 "trtype": "tcp", 00:25:26.135 "traddr": "10.0.0.2", 00:25:26.135 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:26.135 "hostaddr": "10.0.0.2", 00:25:26.135 "hostsvcid": "60000", 00:25:26.135 "adrfam": "ipv4", 00:25:26.135 "trsvcid": "4420", 00:25:26.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.135 "method": "bdev_nvme_attach_controller", 00:25:26.135 "req_id": 1 00:25:26.135 } 00:25:26.135 Got JSON-RPC error response 00:25:26.135 response: 00:25:26.135 { 00:25:26.135 "code": -114, 00:25:26.135 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:26.135 } 00:25:26.135 10:55:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # es=1 00:25:26.135 10:55:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:26.135 10:55:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:26.135 10:55:42 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:26.135 10:55:42 -- common/autotest_common.sh@640 -- # local es=0 00:25:26.135 10:55:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:26.135 10:55:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:26.135 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.135 request: 00:25:26.135 { 00:25:26.135 "name": "NVMe0", 00:25:26.135 "trtype": "tcp", 00:25:26.135 "traddr": "10.0.0.2", 00:25:26.135 "hostaddr": "10.0.0.2", 00:25:26.135 "hostsvcid": "60000", 00:25:26.135 "adrfam": "ipv4", 00:25:26.135 "trsvcid": "4420", 00:25:26.135 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:26.135 "method": "bdev_nvme_attach_controller", 00:25:26.135 "req_id": 1 00:25:26.135 } 00:25:26.135 Got JSON-RPC error response 00:25:26.135 response: 00:25:26.135 { 00:25:26.135 "code": -114, 00:25:26.135 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:26.135 } 00:25:26.135 10:55:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # es=1 00:25:26.135 10:55:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:26.135 10:55:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:26.135 10:55:42 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@640 -- # local es=0 00:25:26.135 10:55:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.135 request: 00:25:26.135 { 00:25:26.135 "name": "NVMe0", 00:25:26.135 "trtype": "tcp", 00:25:26.135 "traddr": "10.0.0.2", 00:25:26.135 "hostaddr": 
"10.0.0.2", 00:25:26.135 "hostsvcid": "60000", 00:25:26.135 "adrfam": "ipv4", 00:25:26.135 "trsvcid": "4420", 00:25:26.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.135 "multipath": "disable", 00:25:26.135 "method": "bdev_nvme_attach_controller", 00:25:26.135 "req_id": 1 00:25:26.135 } 00:25:26.135 Got JSON-RPC error response 00:25:26.135 response: 00:25:26.135 { 00:25:26.135 "code": -114, 00:25:26.135 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:26.135 } 00:25:26.135 10:55:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # es=1 00:25:26.135 10:55:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:26.135 10:55:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:26.135 10:55:42 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:26.135 10:55:42 -- common/autotest_common.sh@640 -- # local es=0 00:25:26.135 10:55:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:26.135 10:55:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:26.135 10:55:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:26.135 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.135 request: 00:25:26.135 { 00:25:26.135 "name": "NVMe0", 00:25:26.135 "trtype": "tcp", 00:25:26.135 "traddr": "10.0.0.2", 00:25:26.135 "hostaddr": "10.0.0.2", 00:25:26.135 "hostsvcid": "60000", 00:25:26.135 "adrfam": "ipv4", 00:25:26.135 "trsvcid": "4420", 00:25:26.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.135 "multipath": "failover", 00:25:26.135 "method": "bdev_nvme_attach_controller", 00:25:26.135 "req_id": 1 00:25:26.135 } 00:25:26.135 Got JSON-RPC error response 00:25:26.135 response: 00:25:26.135 { 00:25:26.135 "code": -114, 00:25:26.135 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:26.135 } 00:25:26.135 10:55:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@643 -- # es=1 00:25:26.135 10:55:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:26.135 10:55:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:26.135 10:55:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:26.135 10:55:42 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.135 10:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.135 10:55:42 -- common/autotest_common.sh@10 -- # set +x 00:25:26.392 00:25:26.392 10:55:43 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:26.392 10:55:43 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.392 10:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.392 10:55:43 -- common/autotest_common.sh@10 -- # set +x 00:25:26.392 10:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.392 10:55:43 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:26.392 10:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.392 10:55:43 -- common/autotest_common.sh@10 -- # set +x 00:25:26.650 00:25:26.650 10:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.650 10:55:43 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.650 10:55:43 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:26.650 10:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.650 10:55:43 -- common/autotest_common.sh@10 -- # set +x 00:25:26.650 10:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.650 10:55:43 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:26.650 10:55:43 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:27.585 0 00:25:27.585 10:55:44 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:27.585 10:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.585 10:55:44 -- common/autotest_common.sh@10 -- # set +x 00:25:27.585 10:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.585 10:55:44 -- host/multicontroller.sh@100 -- # killprocess 3535198 00:25:27.585 10:55:44 -- common/autotest_common.sh@926 -- # '[' -z 3535198 ']' 00:25:27.585 10:55:44 -- common/autotest_common.sh@930 -- # kill -0 3535198 00:25:27.585 10:55:44 -- common/autotest_common.sh@931 -- # uname 00:25:27.585 10:55:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.585 10:55:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3535198 00:25:27.585 10:55:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:27.585 10:55:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:27.585 10:55:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3535198' 00:25:27.585 killing process with pid 3535198 00:25:27.585 10:55:44 -- common/autotest_common.sh@945 -- # kill 3535198 00:25:27.585 10:55:44 -- common/autotest_common.sh@950 -- # wait 3535198 00:25:27.843 10:55:44 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.843 10:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.843 10:55:44 -- common/autotest_common.sh@10 -- # set +x 00:25:27.843 10:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.843 10:55:44 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:27.843 10:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.843 10:55:44 -- common/autotest_common.sh@10 -- # set +x 00:25:27.843 10:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.843 10:55:44 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
00:25:27.843 10:55:44 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.843 10:55:44 -- common/autotest_common.sh@1597 -- # read -r file 00:25:27.843 10:55:44 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:27.843 10:55:44 -- common/autotest_common.sh@1596 -- # sort -u 00:25:27.843 10:55:44 -- common/autotest_common.sh@1598 -- # cat 00:25:27.843 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:27.843 [2024-07-10 10:55:41.897311] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:27.843 [2024-07-10 10:55:41.897404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3535198 ] 00:25:27.843 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.843 [2024-07-10 10:55:41.956923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.843 [2024-07-10 10:55:42.041545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.844 [2024-07-10 10:55:43.217774] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 395f90e6-e2d5-4981-a067-57efff04bf52 already exists 00:25:27.844 [2024-07-10 10:55:43.217835] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:395f90e6-e2d5-4981-a067-57efff04bf52 alias for bdev NVMe1n1 00:25:27.844 [2024-07-10 10:55:43.217864] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:27.844 Running I/O for 1 seconds... 00:25:27.844 00:25:27.844 Latency(us) 00:25:27.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.844 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:27.844 NVMe0n1 : 1.01 19925.96 77.84 0.00 0.00 6406.47 4199.16 11456.66 00:25:27.844 =================================================================================================================== 00:25:27.844 Total : 19925.96 77.84 0.00 0.00 6406.47 4199.16 11456.66 00:25:27.844 Received shutdown signal, test time was about 1.000000 seconds 00:25:27.844 00:25:27.844 Latency(us) 00:25:27.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.844 =================================================================================================================== 00:25:27.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.844 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:27.844 10:55:44 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.844 10:55:44 -- common/autotest_common.sh@1597 -- # read -r file 00:25:27.844 10:55:44 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:27.844 10:55:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:27.844 10:55:44 -- nvmf/common.sh@116 -- # sync 00:25:27.844 10:55:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:27.844 10:55:44 -- nvmf/common.sh@119 -- # set +e 00:25:27.844 10:55:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:27.844 10:55:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:27.844 rmmod nvme_tcp 00:25:27.844 rmmod nvme_fabrics 00:25:27.844 rmmod nvme_keyring 00:25:28.101 10:55:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:28.101 10:55:44 -- nvmf/common.sh@123 -- # set 
-e 00:25:28.101 10:55:44 -- nvmf/common.sh@124 -- # return 0 00:25:28.101 10:55:44 -- nvmf/common.sh@477 -- # '[' -n 3535033 ']' 00:25:28.101 10:55:44 -- nvmf/common.sh@478 -- # killprocess 3535033 00:25:28.101 10:55:44 -- common/autotest_common.sh@926 -- # '[' -z 3535033 ']' 00:25:28.101 10:55:44 -- common/autotest_common.sh@930 -- # kill -0 3535033 00:25:28.101 10:55:44 -- common/autotest_common.sh@931 -- # uname 00:25:28.101 10:55:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:28.101 10:55:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3535033 00:25:28.101 10:55:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:28.101 10:55:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:28.101 10:55:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3535033' 00:25:28.101 killing process with pid 3535033 00:25:28.101 10:55:44 -- common/autotest_common.sh@945 -- # kill 3535033 00:25:28.101 10:55:44 -- common/autotest_common.sh@950 -- # wait 3535033 00:25:28.361 10:55:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:28.361 10:55:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:28.361 10:55:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:28.361 10:55:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.361 10:55:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:28.361 10:55:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.361 10:55:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.361 10:55:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.264 10:55:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:30.264 00:25:30.264 real 0m8.578s 00:25:30.264 user 0m15.960s 00:25:30.264 sys 0m2.370s 00:25:30.264 10:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.264 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:25:30.264 ************************************ 00:25:30.264 END TEST nvmf_multicontroller 00:25:30.264 ************************************ 00:25:30.264 10:55:47 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:30.264 10:55:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.265 10:55:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.265 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:25:30.265 ************************************ 00:25:30.265 START TEST nvmf_aer 00:25:30.265 ************************************ 00:25:30.265 10:55:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:30.523 * Looking for test storage... 
00:25:30.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.523 10:55:47 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.523 10:55:47 -- nvmf/common.sh@7 -- # uname -s 00:25:30.523 10:55:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.523 10:55:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.523 10:55:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.523 10:55:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.523 10:55:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.523 10:55:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.523 10:55:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.523 10:55:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.523 10:55:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.523 10:55:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.523 10:55:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.523 10:55:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.523 10:55:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.523 10:55:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.523 10:55:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.523 10:55:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.523 10:55:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.523 10:55:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.523 10:55:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.523 10:55:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.523 10:55:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.523 10:55:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.523 10:55:47 -- paths/export.sh@5 -- # export PATH 00:25:30.523 10:55:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.523 10:55:47 -- nvmf/common.sh@46 -- # : 0 00:25:30.523 10:55:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.523 10:55:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.523 10:55:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.523 10:55:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.523 10:55:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.523 10:55:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.523 10:55:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.523 10:55:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.523 10:55:47 -- host/aer.sh@11 -- # nvmftestinit 00:25:30.523 10:55:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:30.523 10:55:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.523 10:55:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:30.523 10:55:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:30.523 10:55:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:30.523 10:55:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.523 10:55:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.523 10:55:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.523 10:55:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:30.523 10:55:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:30.523 10:55:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:30.523 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.424 10:55:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:32.424 10:55:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:32.424 10:55:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:32.424 10:55:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:32.424 10:55:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:32.424 10:55:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:32.424 10:55:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:32.424 10:55:49 -- nvmf/common.sh@294 -- # net_devs=() 00:25:32.424 10:55:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:32.424 10:55:49 -- nvmf/common.sh@295 -- # e810=() 00:25:32.424 10:55:49 -- nvmf/common.sh@295 -- # local -ga e810 00:25:32.424 10:55:49 -- nvmf/common.sh@296 -- # x722=() 00:25:32.424 
10:55:49 -- nvmf/common.sh@296 -- # local -ga x722 00:25:32.424 10:55:49 -- nvmf/common.sh@297 -- # mlx=() 00:25:32.424 10:55:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:32.424 10:55:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.424 10:55:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:32.424 10:55:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:32.424 10:55:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:32.424 10:55:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.424 10:55:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:32.424 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:32.424 10:55:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:32.424 10:55:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:32.424 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:32.424 10:55:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:32.424 10:55:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:32.424 10:55:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.424 10:55:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.424 10:55:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.424 10:55:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.424 10:55:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:32.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:32.424 10:55:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.425 10:55:49 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:32.425 10:55:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.425 10:55:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:32.425 10:55:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.425 10:55:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:32.425 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:32.425 10:55:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.425 10:55:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:32.425 10:55:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:32.425 10:55:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:32.425 10:55:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:32.425 10:55:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:32.425 10:55:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.425 10:55:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.425 10:55:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.425 10:55:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:32.425 10:55:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.425 10:55:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.425 10:55:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:32.425 10:55:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.425 10:55:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.425 10:55:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:32.425 10:55:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:32.425 10:55:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.425 10:55:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.425 10:55:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.425 10:55:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.425 10:55:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:32.425 10:55:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.425 10:55:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.425 10:55:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.425 10:55:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:32.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:32.425 00:25:32.425 --- 10.0.0.2 ping statistics --- 00:25:32.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.425 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:32.425 10:55:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:25:32.425 00:25:32.425 --- 10.0.0.1 ping statistics --- 00:25:32.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.425 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:32.425 10:55:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.425 10:55:49 -- nvmf/common.sh@410 -- # return 0 00:25:32.425 10:55:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:32.425 10:55:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.425 10:55:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:32.425 10:55:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:32.425 10:55:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.425 10:55:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:32.425 10:55:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:32.425 10:55:49 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:32.425 10:55:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:32.425 10:55:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:32.425 10:55:49 -- common/autotest_common.sh@10 -- # set +x 00:25:32.425 10:55:49 -- nvmf/common.sh@469 -- # nvmfpid=3537434 00:25:32.425 10:55:49 -- nvmf/common.sh@470 -- # waitforlisten 3537434 00:25:32.425 10:55:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:32.425 10:55:49 -- common/autotest_common.sh@819 -- # '[' -z 3537434 ']' 00:25:32.425 10:55:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.425 10:55:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:32.425 10:55:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.425 10:55:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:32.425 10:55:49 -- common/autotest_common.sh@10 -- # set +x 00:25:32.684 [2024-07-10 10:55:49.271400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:32.684 [2024-07-10 10:55:49.271512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.684 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.684 [2024-07-10 10:55:49.338648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.684 [2024-07-10 10:55:49.427774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.684 [2024-07-10 10:55:49.427936] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.684 [2024-07-10 10:55:49.427954] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.684 [2024-07-10 10:55:49.427980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
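The trace that follows shows aer.sh standing up the target through the framework's rpc_cmd wrapper (the nvmf_tgt itself was started inside the cvl_0_0_ns_spdk namespace created above). Condensed into plain RPC calls, the bring-up is roughly the sketch below; the ./scripts/rpc.py path and default RPC socket are assumptions, while the bdev name, subsystem NQN, and the 10.0.0.2:4420 listener are the values seen in this run:

    # sketch only - mirrors the rpc_cmd calls in the trace below, not a verbatim excerpt of aer.sh
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the test's default -o/-u options
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0           # 64 MB RAM-backed bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -m 2 limit leaves room for a second namespace; the test later hot-adds Malloc1 as nsid 2 while the aer tool is connected, which is what triggers the Changed Namespace AER seen further down.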
00:25:32.684 [2024-07-10 10:55:49.428040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.684 [2024-07-10 10:55:49.428068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.684 [2024-07-10 10:55:49.428127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.684 [2024-07-10 10:55:49.428129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.616 10:55:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.616 10:55:50 -- common/autotest_common.sh@852 -- # return 0 00:25:33.616 10:55:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:33.616 10:55:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 10:55:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.616 10:55:50 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.616 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 [2024-07-10 10:55:50.243045] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.616 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.616 10:55:50 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:33.616 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 Malloc0 00:25:33.616 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.616 10:55:50 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:33.616 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.616 10:55:50 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.616 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.616 10:55:50 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.616 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 [2024-07-10 10:55:50.296342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.616 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.616 10:55:50 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:33.616 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.616 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.616 [2024-07-10 10:55:50.304107] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:33.616 [ 00:25:33.616 { 00:25:33.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.616 "subtype": "Discovery", 00:25:33.616 "listen_addresses": [], 00:25:33.616 "allow_any_host": true, 00:25:33.616 "hosts": [] 00:25:33.616 }, 00:25:33.616 { 00:25:33.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:33.616 "subtype": "NVMe", 00:25:33.616 "listen_addresses": [ 00:25:33.616 { 00:25:33.616 "transport": "TCP", 00:25:33.616 "trtype": "TCP", 00:25:33.616 "adrfam": "IPv4", 00:25:33.616 "traddr": "10.0.0.2", 00:25:33.616 "trsvcid": "4420" 00:25:33.616 } 00:25:33.616 ], 00:25:33.616 "allow_any_host": true, 00:25:33.616 "hosts": [], 00:25:33.616 "serial_number": "SPDK00000000000001", 00:25:33.616 "model_number": "SPDK bdev Controller", 00:25:33.616 "max_namespaces": 2, 00:25:33.616 "min_cntlid": 1, 00:25:33.616 "max_cntlid": 65519, 00:25:33.616 "namespaces": [ 00:25:33.616 { 00:25:33.616 "nsid": 1, 00:25:33.616 "bdev_name": "Malloc0", 00:25:33.616 "name": "Malloc0", 00:25:33.616 "nguid": "361C41F3A5824424BFE9CAB7B81A9246", 00:25:33.616 "uuid": "361c41f3-a582-4424-bfe9-cab7b81a9246" 00:25:33.616 } 00:25:33.616 ] 00:25:33.616 } 00:25:33.616 ] 00:25:33.616 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.616 10:55:50 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:33.616 10:55:50 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:33.616 10:55:50 -- host/aer.sh@33 -- # aerpid=3537592 00:25:33.616 10:55:50 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:33.616 10:55:50 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:33.616 10:55:50 -- common/autotest_common.sh@1244 -- # local i=0 00:25:33.616 10:55:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.616 10:55:50 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:33.616 10:55:50 -- common/autotest_common.sh@1247 -- # i=1 00:25:33.616 10:55:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:33.616 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.616 10:55:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.616 10:55:50 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:33.616 10:55:50 -- common/autotest_common.sh@1247 -- # i=2 00:25:33.616 10:55:50 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:33.874 10:55:50 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.874 10:55:50 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.874 10:55:50 -- common/autotest_common.sh@1255 -- # return 0 00:25:33.874 10:55:50 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:33.874 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.874 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.874 Malloc1 00:25:33.874 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.874 10:55:50 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:33.874 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.874 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.874 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.874 10:55:50 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:33.874 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.874 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.874 Asynchronous Event Request test 00:25:33.874 Attaching to 10.0.0.2 00:25:33.874 Attached to 10.0.0.2 00:25:33.874 Registering asynchronous event callbacks... 
00:25:33.874 Starting namespace attribute notice tests for all controllers... 00:25:33.874 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:33.874 aer_cb - Changed Namespace 00:25:33.874 Cleaning up... 00:25:33.874 [ 00:25:33.874 { 00:25:33.874 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.874 "subtype": "Discovery", 00:25:33.874 "listen_addresses": [], 00:25:33.874 "allow_any_host": true, 00:25:33.874 "hosts": [] 00:25:33.874 }, 00:25:33.874 { 00:25:33.874 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.874 "subtype": "NVMe", 00:25:33.874 "listen_addresses": [ 00:25:33.874 { 00:25:33.874 "transport": "TCP", 00:25:33.874 "trtype": "TCP", 00:25:33.874 "adrfam": "IPv4", 00:25:33.874 "traddr": "10.0.0.2", 00:25:33.874 "trsvcid": "4420" 00:25:33.874 } 00:25:33.874 ], 00:25:33.874 "allow_any_host": true, 00:25:33.874 "hosts": [], 00:25:33.874 "serial_number": "SPDK00000000000001", 00:25:33.874 "model_number": "SPDK bdev Controller", 00:25:33.874 "max_namespaces": 2, 00:25:33.874 "min_cntlid": 1, 00:25:33.874 "max_cntlid": 65519, 00:25:33.874 "namespaces": [ 00:25:33.874 { 00:25:33.874 "nsid": 1, 00:25:33.874 "bdev_name": "Malloc0", 00:25:33.874 "name": "Malloc0", 00:25:33.874 "nguid": "361C41F3A5824424BFE9CAB7B81A9246", 00:25:33.874 "uuid": "361c41f3-a582-4424-bfe9-cab7b81a9246" 00:25:33.874 }, 00:25:33.874 { 00:25:33.874 "nsid": 2, 00:25:33.874 "bdev_name": "Malloc1", 00:25:33.874 "name": "Malloc1", 00:25:33.874 "nguid": "486FF2F0E31B4CC3BF6E7E6584263F22", 00:25:33.874 "uuid": "486ff2f0-e31b-4cc3-bf6e-7e6584263f22" 00:25:33.874 } 00:25:33.874 ] 00:25:33.874 } 00:25:33.874 ] 00:25:33.874 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.874 10:55:50 -- host/aer.sh@43 -- # wait 3537592 00:25:33.874 10:55:50 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:33.874 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.874 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.874 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.874 10:55:50 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:33.874 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.874 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.874 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.874 10:55:50 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.874 10:55:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.874 10:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:33.874 10:55:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.874 10:55:50 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:33.874 10:55:50 -- host/aer.sh@51 -- # nvmftestfini 00:25:33.874 10:55:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:33.874 10:55:50 -- nvmf/common.sh@116 -- # sync 00:25:33.874 10:55:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:33.874 10:55:50 -- nvmf/common.sh@119 -- # set +e 00:25:33.874 10:55:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:33.874 10:55:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:33.874 rmmod nvme_tcp 00:25:33.874 rmmod nvme_fabrics 00:25:34.132 rmmod nvme_keyring 00:25:34.132 10:55:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:34.132 10:55:50 -- nvmf/common.sh@123 -- # set -e 00:25:34.132 10:55:50 -- nvmf/common.sh@124 -- # return 0 00:25:34.132 10:55:50 -- nvmf/common.sh@477 -- # '[' -n 3537434 ']' 00:25:34.132 10:55:50 
-- nvmf/common.sh@478 -- # killprocess 3537434 00:25:34.132 10:55:50 -- common/autotest_common.sh@926 -- # '[' -z 3537434 ']' 00:25:34.132 10:55:50 -- common/autotest_common.sh@930 -- # kill -0 3537434 00:25:34.132 10:55:50 -- common/autotest_common.sh@931 -- # uname 00:25:34.132 10:55:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:34.132 10:55:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3537434 00:25:34.132 10:55:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:34.132 10:55:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:34.132 10:55:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3537434' 00:25:34.132 killing process with pid 3537434 00:25:34.132 10:55:50 -- common/autotest_common.sh@945 -- # kill 3537434 00:25:34.132 [2024-07-10 10:55:50.757636] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:34.132 10:55:50 -- common/autotest_common.sh@950 -- # wait 3537434 00:25:34.391 10:55:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:34.391 10:55:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:34.391 10:55:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:34.391 10:55:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.391 10:55:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:34.391 10:55:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.391 10:55:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.391 10:55:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.293 10:55:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:36.293 00:25:36.293 real 0m5.995s 00:25:36.293 user 0m7.059s 00:25:36.293 sys 0m1.901s 00:25:36.293 10:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.293 10:55:53 -- common/autotest_common.sh@10 -- # set +x 00:25:36.293 ************************************ 00:25:36.293 END TEST nvmf_aer 00:25:36.293 ************************************ 00:25:36.293 10:55:53 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:36.293 10:55:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:36.293 10:55:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:36.293 10:55:53 -- common/autotest_common.sh@10 -- # set +x 00:25:36.293 ************************************ 00:25:36.293 START TEST nvmf_async_init 00:25:36.293 ************************************ 00:25:36.293 10:55:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:36.293 * Looking for test storage... 
00:25:36.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.293 10:55:53 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.293 10:55:53 -- nvmf/common.sh@7 -- # uname -s 00:25:36.293 10:55:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.293 10:55:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.293 10:55:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.293 10:55:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.293 10:55:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.293 10:55:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.293 10:55:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.293 10:55:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.293 10:55:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.293 10:55:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.550 10:55:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.550 10:55:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.550 10:55:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.550 10:55:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.550 10:55:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.550 10:55:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.550 10:55:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.550 10:55:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.550 10:55:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.551 10:55:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.551 10:55:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.551 10:55:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.551 10:55:53 -- paths/export.sh@5 -- # export PATH 00:25:36.551 10:55:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.551 10:55:53 -- nvmf/common.sh@46 -- # : 0 00:25:36.551 10:55:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:36.551 10:55:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:36.551 10:55:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:36.551 10:55:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.551 10:55:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.551 10:55:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:36.551 10:55:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:36.551 10:55:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:36.551 10:55:53 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:36.551 10:55:53 -- host/async_init.sh@14 -- # null_block_size=512 00:25:36.551 10:55:53 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:36.551 10:55:53 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:36.551 10:55:53 -- host/async_init.sh@20 -- # uuidgen 00:25:36.551 10:55:53 -- host/async_init.sh@20 -- # tr -d - 00:25:36.551 10:55:53 -- host/async_init.sh@20 -- # nguid=d996417809dc45358a6ee203f89c828f 00:25:36.551 10:55:53 -- host/async_init.sh@22 -- # nvmftestinit 00:25:36.551 10:55:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:36.551 10:55:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.551 10:55:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:36.551 10:55:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:36.551 10:55:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:36.551 10:55:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.551 10:55:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.551 10:55:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.551 10:55:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:36.551 10:55:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:36.551 10:55:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:36.551 10:55:53 -- common/autotest_common.sh@10 -- # set +x 00:25:38.467 10:55:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:38.467 10:55:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:38.467 10:55:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:38.467 10:55:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:38.467 10:55:54 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:38.467 10:55:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:38.467 10:55:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:38.467 10:55:55 -- nvmf/common.sh@294 -- # net_devs=() 00:25:38.467 10:55:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:38.467 10:55:55 -- nvmf/common.sh@295 -- # e810=() 00:25:38.467 10:55:55 -- nvmf/common.sh@295 -- # local -ga e810 00:25:38.467 10:55:55 -- nvmf/common.sh@296 -- # x722=() 00:25:38.467 10:55:55 -- nvmf/common.sh@296 -- # local -ga x722 00:25:38.467 10:55:55 -- nvmf/common.sh@297 -- # mlx=() 00:25:38.467 10:55:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:38.467 10:55:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.467 10:55:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:38.467 10:55:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:38.467 10:55:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:38.467 10:55:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.467 10:55:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.467 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.467 10:55:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.467 10:55:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.467 10:55:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:38.467 10:55:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:38.467 10:55:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.467 
10:55:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.467 10:55:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.467 10:55:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.467 10:55:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:38.467 10:55:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.468 10:55:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.468 10:55:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.468 10:55:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.468 10:55:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.468 10:55:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.468 10:55:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.468 10:55:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:38.468 10:55:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:38.468 10:55:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:38.468 10:55:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:38.468 10:55:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:38.468 10:55:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.468 10:55:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.468 10:55:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.468 10:55:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:38.468 10:55:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.468 10:55:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.468 10:55:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:38.468 10:55:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.468 10:55:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.468 10:55:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:38.468 10:55:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:38.468 10:55:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.468 10:55:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.468 10:55:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.468 10:55:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.468 10:55:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:38.468 10:55:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.468 10:55:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.468 10:55:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.468 10:55:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:38.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:25:38.468 00:25:38.468 --- 10.0.0.2 ping statistics --- 00:25:38.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.468 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:38.468 10:55:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:25:38.468 00:25:38.468 --- 10.0.0.1 ping statistics --- 00:25:38.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.468 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:38.468 10:55:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.468 10:55:55 -- nvmf/common.sh@410 -- # return 0 00:25:38.468 10:55:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:38.468 10:55:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.468 10:55:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:38.468 10:55:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:38.468 10:55:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.468 10:55:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:38.468 10:55:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:38.468 10:55:55 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:38.468 10:55:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:38.468 10:55:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:38.468 10:55:55 -- common/autotest_common.sh@10 -- # set +x 00:25:38.468 10:55:55 -- nvmf/common.sh@469 -- # nvmfpid=3539665 00:25:38.468 10:55:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:38.468 10:55:55 -- nvmf/common.sh@470 -- # waitforlisten 3539665 00:25:38.468 10:55:55 -- common/autotest_common.sh@819 -- # '[' -z 3539665 ']' 00:25:38.468 10:55:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.468 10:55:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:38.468 10:55:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.468 10:55:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:38.468 10:55:55 -- common/autotest_common.sh@10 -- # set +x 00:25:38.468 [2024-07-10 10:55:55.212806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:38.468 [2024-07-10 10:55:55.212886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.468 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.468 [2024-07-10 10:55:55.285277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.725 [2024-07-10 10:55:55.373259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.725 [2024-07-10 10:55:55.373420] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.725 [2024-07-10 10:55:55.373450] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.725 [2024-07-10 10:55:55.373464] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
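async_init.sh repeats the same bring-up against a null bdev and then points bdev_nvme on the same nvmf_tgt instance back at its own listener, so the exported namespace shows up locally as bdev nvme0n1. Condensed from the rpc_cmd trace that follows (the ./scripts/rpc.py path is an assumption; the names, sizes, and NGUID are the ones logged in this run):

    # sketch only - target side: export a 1 GiB null bdev (1024 MB, 512-byte blocks) as nsid 1 of cnode0
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d996417809dc45358a6ee203f89c828f
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: connect over TCP so the namespace is exposed as bdev nvme0n1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The -g value is the dash-stripped uuidgen output, which is why the bdev_get_bdevs dump below reports uuid d9964178-09dc-4535-8a6e-e203f89c828f and num_blocks 2097152 (1024 MB / 512 B).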
00:25:38.725 [2024-07-10 10:55:55.373505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.659 10:55:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:39.659 10:55:56 -- common/autotest_common.sh@852 -- # return 0 00:25:39.659 10:55:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:39.659 10:55:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:39.659 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.659 10:55:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.659 10:55:56 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:39.659 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.659 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.659 [2024-07-10 10:55:56.165300] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.659 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.659 10:55:56 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:39.659 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.659 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.659 null0 00:25:39.659 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.659 10:55:56 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:39.659 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.659 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.659 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.659 10:55:56 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:39.659 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.659 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.659 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.659 10:55:56 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d996417809dc45358a6ee203f89c828f 00:25:39.659 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.659 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.659 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.659 10:55:56 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.659 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.660 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.660 [2024-07-10 10:55:56.205514] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.660 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.660 10:55:56 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:39.660 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.660 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.660 nvme0n1 00:25:39.660 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.660 10:55:56 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:39.660 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.660 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.660 [ 00:25:39.660 { 00:25:39.660 "name": "nvme0n1", 00:25:39.660 "aliases": [ 00:25:39.660 
"d9964178-09dc-4535-8a6e-e203f89c828f" 00:25:39.660 ], 00:25:39.660 "product_name": "NVMe disk", 00:25:39.660 "block_size": 512, 00:25:39.660 "num_blocks": 2097152, 00:25:39.660 "uuid": "d9964178-09dc-4535-8a6e-e203f89c828f", 00:25:39.660 "assigned_rate_limits": { 00:25:39.660 "rw_ios_per_sec": 0, 00:25:39.660 "rw_mbytes_per_sec": 0, 00:25:39.660 "r_mbytes_per_sec": 0, 00:25:39.660 "w_mbytes_per_sec": 0 00:25:39.660 }, 00:25:39.660 "claimed": false, 00:25:39.660 "zoned": false, 00:25:39.660 "supported_io_types": { 00:25:39.660 "read": true, 00:25:39.660 "write": true, 00:25:39.660 "unmap": false, 00:25:39.660 "write_zeroes": true, 00:25:39.660 "flush": true, 00:25:39.660 "reset": true, 00:25:39.660 "compare": true, 00:25:39.660 "compare_and_write": true, 00:25:39.660 "abort": true, 00:25:39.660 "nvme_admin": true, 00:25:39.660 "nvme_io": true 00:25:39.660 }, 00:25:39.660 "driver_specific": { 00:25:39.660 "nvme": [ 00:25:39.660 { 00:25:39.660 "trid": { 00:25:39.660 "trtype": "TCP", 00:25:39.660 "adrfam": "IPv4", 00:25:39.660 "traddr": "10.0.0.2", 00:25:39.660 "trsvcid": "4420", 00:25:39.660 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:39.660 }, 00:25:39.660 "ctrlr_data": { 00:25:39.660 "cntlid": 1, 00:25:39.660 "vendor_id": "0x8086", 00:25:39.660 "model_number": "SPDK bdev Controller", 00:25:39.660 "serial_number": "00000000000000000000", 00:25:39.660 "firmware_revision": "24.01.1", 00:25:39.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.660 "oacs": { 00:25:39.660 "security": 0, 00:25:39.660 "format": 0, 00:25:39.660 "firmware": 0, 00:25:39.660 "ns_manage": 0 00:25:39.660 }, 00:25:39.660 "multi_ctrlr": true, 00:25:39.660 "ana_reporting": false 00:25:39.660 }, 00:25:39.660 "vs": { 00:25:39.660 "nvme_version": "1.3" 00:25:39.660 }, 00:25:39.660 "ns_data": { 00:25:39.660 "id": 1, 00:25:39.660 "can_share": true 00:25:39.660 } 00:25:39.660 } 00:25:39.660 ], 00:25:39.660 "mp_policy": "active_passive" 00:25:39.660 } 00:25:39.660 } 00:25:39.660 ] 00:25:39.660 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.660 10:55:56 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:39.660 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.660 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.660 [2024-07-10 10:55:56.454251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:39.660 [2024-07-10 10:55:56.454340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1368480 (9): Bad file descriptor 00:25:39.918 [2024-07-10 10:55:56.586576] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:39.918 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.918 10:55:56 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:39.918 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.918 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.918 [ 00:25:39.918 { 00:25:39.918 "name": "nvme0n1", 00:25:39.918 "aliases": [ 00:25:39.918 "d9964178-09dc-4535-8a6e-e203f89c828f" 00:25:39.918 ], 00:25:39.918 "product_name": "NVMe disk", 00:25:39.918 "block_size": 512, 00:25:39.918 "num_blocks": 2097152, 00:25:39.918 "uuid": "d9964178-09dc-4535-8a6e-e203f89c828f", 00:25:39.918 "assigned_rate_limits": { 00:25:39.918 "rw_ios_per_sec": 0, 00:25:39.918 "rw_mbytes_per_sec": 0, 00:25:39.918 "r_mbytes_per_sec": 0, 00:25:39.918 "w_mbytes_per_sec": 0 00:25:39.918 }, 00:25:39.918 "claimed": false, 00:25:39.918 "zoned": false, 00:25:39.918 "supported_io_types": { 00:25:39.918 "read": true, 00:25:39.918 "write": true, 00:25:39.918 "unmap": false, 00:25:39.918 "write_zeroes": true, 00:25:39.918 "flush": true, 00:25:39.918 "reset": true, 00:25:39.918 "compare": true, 00:25:39.918 "compare_and_write": true, 00:25:39.918 "abort": true, 00:25:39.918 "nvme_admin": true, 00:25:39.918 "nvme_io": true 00:25:39.918 }, 00:25:39.918 "driver_specific": { 00:25:39.918 "nvme": [ 00:25:39.918 { 00:25:39.918 "trid": { 00:25:39.918 "trtype": "TCP", 00:25:39.918 "adrfam": "IPv4", 00:25:39.918 "traddr": "10.0.0.2", 00:25:39.918 "trsvcid": "4420", 00:25:39.918 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:39.918 }, 00:25:39.918 "ctrlr_data": { 00:25:39.918 "cntlid": 2, 00:25:39.918 "vendor_id": "0x8086", 00:25:39.918 "model_number": "SPDK bdev Controller", 00:25:39.919 "serial_number": "00000000000000000000", 00:25:39.919 "firmware_revision": "24.01.1", 00:25:39.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.919 "oacs": { 00:25:39.919 "security": 0, 00:25:39.919 "format": 0, 00:25:39.919 "firmware": 0, 00:25:39.919 "ns_manage": 0 00:25:39.919 }, 00:25:39.919 "multi_ctrlr": true, 00:25:39.919 "ana_reporting": false 00:25:39.919 }, 00:25:39.919 "vs": { 00:25:39.919 "nvme_version": "1.3" 00:25:39.919 }, 00:25:39.919 "ns_data": { 00:25:39.919 "id": 1, 00:25:39.919 "can_share": true 00:25:39.919 } 00:25:39.919 } 00:25:39.919 ], 00:25:39.919 "mp_policy": "active_passive" 00:25:39.919 } 00:25:39.919 } 00:25:39.919 ] 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@53 -- # mktemp 00:25:39.919 10:55:56 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XC1KHXDTh4 00:25:39.919 10:55:56 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:39.919 10:55:56 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XC1KHXDTh4 00:25:39.919 10:55:56 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.919 [2024-07-10 10:55:56.630857] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:39.919 [2024-07-10 10:55:56.630986] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XC1KHXDTh4 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XC1KHXDTh4 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.919 [2024-07-10 10:55:56.646892] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:39.919 nvme0n1 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:39.919 [ 00:25:39.919 { 00:25:39.919 "name": "nvme0n1", 00:25:39.919 "aliases": [ 00:25:39.919 "d9964178-09dc-4535-8a6e-e203f89c828f" 00:25:39.919 ], 00:25:39.919 "product_name": "NVMe disk", 00:25:39.919 "block_size": 512, 00:25:39.919 "num_blocks": 2097152, 00:25:39.919 "uuid": "d9964178-09dc-4535-8a6e-e203f89c828f", 00:25:39.919 "assigned_rate_limits": { 00:25:39.919 "rw_ios_per_sec": 0, 00:25:39.919 "rw_mbytes_per_sec": 0, 00:25:39.919 "r_mbytes_per_sec": 0, 00:25:39.919 "w_mbytes_per_sec": 0 00:25:39.919 }, 00:25:39.919 "claimed": false, 00:25:39.919 "zoned": false, 00:25:39.919 "supported_io_types": { 00:25:39.919 "read": true, 00:25:39.919 "write": true, 00:25:39.919 "unmap": false, 00:25:39.919 "write_zeroes": true, 00:25:39.919 "flush": true, 00:25:39.919 "reset": true, 00:25:39.919 "compare": true, 00:25:39.919 "compare_and_write": true, 00:25:39.919 "abort": true, 00:25:39.919 "nvme_admin": true, 00:25:39.919 "nvme_io": true 00:25:39.919 }, 00:25:39.919 "driver_specific": { 00:25:39.919 "nvme": [ 00:25:39.919 { 00:25:39.919 "trid": { 00:25:39.919 "trtype": "TCP", 00:25:39.919 "adrfam": "IPv4", 00:25:39.919 "traddr": "10.0.0.2", 00:25:39.919 "trsvcid": "4421", 00:25:39.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:39.919 }, 00:25:39.919 "ctrlr_data": { 00:25:39.919 "cntlid": 3, 00:25:39.919 "vendor_id": "0x8086", 00:25:39.919 "model_number": "SPDK bdev Controller", 00:25:39.919 "serial_number": "00000000000000000000", 00:25:39.919 "firmware_revision": "24.01.1", 00:25:39.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.919 "oacs": { 00:25:39.919 "security": 0, 00:25:39.919 "format": 0, 00:25:39.919 "firmware": 0, 00:25:39.919 "ns_manage": 0 00:25:39.919 }, 00:25:39.919 "multi_ctrlr": true, 00:25:39.919 "ana_reporting": false 00:25:39.919 }, 00:25:39.919 "vs": 
{ 00:25:39.919 "nvme_version": "1.3" 00:25:39.919 }, 00:25:39.919 "ns_data": { 00:25:39.919 "id": 1, 00:25:39.919 "can_share": true 00:25:39.919 } 00:25:39.919 } 00:25:39.919 ], 00:25:39.919 "mp_policy": "active_passive" 00:25:39.919 } 00:25:39.919 } 00:25:39.919 ] 00:25:39.919 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.919 10:55:56 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.919 10:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.919 10:55:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.177 10:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.177 10:55:56 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XC1KHXDTh4 00:25:40.177 10:55:56 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:40.177 10:55:56 -- host/async_init.sh@78 -- # nvmftestfini 00:25:40.177 10:55:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:40.177 10:55:56 -- nvmf/common.sh@116 -- # sync 00:25:40.177 10:55:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:40.177 10:55:56 -- nvmf/common.sh@119 -- # set +e 00:25:40.177 10:55:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:40.177 10:55:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:40.177 rmmod nvme_tcp 00:25:40.177 rmmod nvme_fabrics 00:25:40.177 rmmod nvme_keyring 00:25:40.177 10:55:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:40.177 10:55:56 -- nvmf/common.sh@123 -- # set -e 00:25:40.177 10:55:56 -- nvmf/common.sh@124 -- # return 0 00:25:40.177 10:55:56 -- nvmf/common.sh@477 -- # '[' -n 3539665 ']' 00:25:40.177 10:55:56 -- nvmf/common.sh@478 -- # killprocess 3539665 00:25:40.177 10:55:56 -- common/autotest_common.sh@926 -- # '[' -z 3539665 ']' 00:25:40.177 10:55:56 -- common/autotest_common.sh@930 -- # kill -0 3539665 00:25:40.177 10:55:56 -- common/autotest_common.sh@931 -- # uname 00:25:40.177 10:55:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:40.177 10:55:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3539665 00:25:40.177 10:55:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:40.177 10:55:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:40.177 10:55:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3539665' 00:25:40.177 killing process with pid 3539665 00:25:40.177 10:55:56 -- common/autotest_common.sh@945 -- # kill 3539665 00:25:40.177 10:55:56 -- common/autotest_common.sh@950 -- # wait 3539665 00:25:40.436 10:55:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:40.436 10:55:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:40.436 10:55:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:40.436 10:55:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.436 10:55:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:40.436 10:55:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.436 10:55:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.436 10:55:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.341 10:55:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:42.341 00:25:42.341 real 0m5.988s 00:25:42.341 user 0m2.819s 00:25:42.341 sys 0m1.736s 00:25:42.341 10:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.341 10:55:59 -- common/autotest_common.sh@10 -- # set +x 00:25:42.341 ************************************ 00:25:42.341 END TEST nvmf_async_init 00:25:42.341 
************************************ 00:25:42.341 10:55:59 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:42.341 10:55:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:42.341 10:55:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.341 10:55:59 -- common/autotest_common.sh@10 -- # set +x 00:25:42.341 ************************************ 00:25:42.341 START TEST dma 00:25:42.341 ************************************ 00:25:42.341 10:55:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:42.341 * Looking for test storage... 00:25:42.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.341 10:55:59 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.341 10:55:59 -- nvmf/common.sh@7 -- # uname -s 00:25:42.341 10:55:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.341 10:55:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.341 10:55:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.341 10:55:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.341 10:55:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.341 10:55:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.341 10:55:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.341 10:55:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.341 10:55:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.341 10:55:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.341 10:55:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.341 10:55:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.341 10:55:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.341 10:55:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.341 10:55:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.341 10:55:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.341 10:55:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.341 10:55:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.341 10:55:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.341 10:55:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.341 10:55:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.341 10:55:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.341 10:55:59 -- paths/export.sh@5 -- # export PATH 00:25:42.341 10:55:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.341 10:55:59 -- nvmf/common.sh@46 -- # : 0 00:25:42.341 10:55:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:42.341 10:55:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:42.341 10:55:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:42.341 10:55:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.341 10:55:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.341 10:55:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:42.341 10:55:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:42.341 10:55:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:42.341 10:55:59 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:42.341 10:55:59 -- host/dma.sh@13 -- # exit 0 00:25:42.341 00:25:42.341 real 0m0.065s 00:25:42.341 user 0m0.033s 00:25:42.341 sys 0m0.038s 00:25:42.341 10:55:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.341 10:55:59 -- common/autotest_common.sh@10 -- # set +x 00:25:42.341 ************************************ 00:25:42.341 END TEST dma 00:25:42.341 ************************************ 00:25:42.599 10:55:59 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:42.599 10:55:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:42.599 10:55:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.600 10:55:59 -- common/autotest_common.sh@10 -- # set +x 00:25:42.600 ************************************ 00:25:42.600 START TEST nvmf_identify 00:25:42.600 ************************************ 00:25:42.600 10:55:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:42.600 * Looking for 
test storage... 00:25:42.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.600 10:55:59 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.600 10:55:59 -- nvmf/common.sh@7 -- # uname -s 00:25:42.600 10:55:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.600 10:55:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.600 10:55:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.600 10:55:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.600 10:55:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.600 10:55:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.600 10:55:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.600 10:55:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.600 10:55:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.600 10:55:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.600 10:55:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.600 10:55:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.600 10:55:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.600 10:55:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.600 10:55:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.600 10:55:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.600 10:55:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.600 10:55:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.600 10:55:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.600 10:55:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.600 10:55:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.600 10:55:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.600 10:55:59 -- paths/export.sh@5 -- # export PATH 00:25:42.600 10:55:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.600 10:55:59 -- nvmf/common.sh@46 -- # : 0 00:25:42.600 10:55:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:42.600 10:55:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:42.600 10:55:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:42.600 10:55:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.600 10:55:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.600 10:55:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:42.600 10:55:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:42.600 10:55:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:42.600 10:55:59 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.600 10:55:59 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.600 10:55:59 -- host/identify.sh@14 -- # nvmftestinit 00:25:42.600 10:55:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:42.600 10:55:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.600 10:55:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:42.600 10:55:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:42.600 10:55:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:42.600 10:55:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.600 10:55:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.600 10:55:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.600 10:55:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:42.600 10:55:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:42.600 10:55:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:42.600 10:55:59 -- common/autotest_common.sh@10 -- # set +x 00:25:44.501 10:56:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:44.501 10:56:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:44.501 10:56:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:44.501 10:56:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:44.501 10:56:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:44.501 10:56:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:44.501 10:56:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:44.501 10:56:01 -- nvmf/common.sh@294 -- # net_devs=() 00:25:44.501 10:56:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:44.501 10:56:01 -- nvmf/common.sh@295 
-- # e810=() 00:25:44.501 10:56:01 -- nvmf/common.sh@295 -- # local -ga e810 00:25:44.501 10:56:01 -- nvmf/common.sh@296 -- # x722=() 00:25:44.501 10:56:01 -- nvmf/common.sh@296 -- # local -ga x722 00:25:44.501 10:56:01 -- nvmf/common.sh@297 -- # mlx=() 00:25:44.501 10:56:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:44.501 10:56:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.501 10:56:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:44.501 10:56:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:44.501 10:56:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:44.501 10:56:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:44.501 10:56:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:44.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:44.501 10:56:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:44.501 10:56:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:44.501 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:44.501 10:56:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:44.501 10:56:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:44.501 10:56:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:44.502 10:56:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:44.502 10:56:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:44.502 10:56:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.502 10:56:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:44.502 10:56:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.502 10:56:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:44.502 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:25:44.502 10:56:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.502 10:56:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:44.502 10:56:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.502 10:56:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:44.502 10:56:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.502 10:56:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:44.502 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:44.502 10:56:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.502 10:56:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:44.502 10:56:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:44.502 10:56:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:44.502 10:56:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:44.502 10:56:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:44.502 10:56:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.502 10:56:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.502 10:56:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.502 10:56:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:44.502 10:56:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.502 10:56:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.502 10:56:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:44.502 10:56:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.502 10:56:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.502 10:56:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:44.502 10:56:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:44.502 10:56:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.502 10:56:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.502 10:56:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.502 10:56:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.502 10:56:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:44.502 10:56:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.502 10:56:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.502 10:56:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.760 10:56:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:44.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:25:44.760 00:25:44.760 --- 10.0.0.2 ping statistics --- 00:25:44.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.760 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:44.760 10:56:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:25:44.760 00:25:44.760 --- 10.0.0.1 ping statistics --- 00:25:44.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.760 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:44.760 10:56:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.760 10:56:01 -- nvmf/common.sh@410 -- # return 0 00:25:44.760 10:56:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:44.760 10:56:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.760 10:56:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:44.760 10:56:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:44.760 10:56:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.760 10:56:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:44.760 10:56:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:44.760 10:56:01 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:44.760 10:56:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:44.760 10:56:01 -- common/autotest_common.sh@10 -- # set +x 00:25:44.760 10:56:01 -- host/identify.sh@19 -- # nvmfpid=3541810 00:25:44.760 10:56:01 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:44.760 10:56:01 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:44.760 10:56:01 -- host/identify.sh@23 -- # waitforlisten 3541810 00:25:44.760 10:56:01 -- common/autotest_common.sh@819 -- # '[' -z 3541810 ']' 00:25:44.760 10:56:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.760 10:56:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:44.760 10:56:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.760 10:56:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.760 10:56:01 -- common/autotest_common.sh@10 -- # set +x 00:25:44.760 [2024-07-10 10:56:01.415707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:44.760 [2024-07-10 10:56:01.415797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.760 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.760 [2024-07-10 10:56:01.480602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:44.760 [2024-07-10 10:56:01.570102] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:44.760 [2024-07-10 10:56:01.570257] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.760 [2024-07-10 10:56:01.570289] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.760 [2024-07-10 10:56:01.570307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
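The TCP test bed above isolates target and initiator with a network namespace before nvmf_tgt is started. A condensed sketch of that setup, assuming the interface names detected in this run (cvl_0_0 for the target side, cvl_0_1 for the initiator side) and the 10.0.0.0/24 addresses the harness uses:

  # Move the target-side port into its own namespace; the initiator stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before launching the target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1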
00:25:44.760 [2024-07-10 10:56:01.570368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.760 [2024-07-10 10:56:01.570435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.760 [2024-07-10 10:56:01.570490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.760 [2024-07-10 10:56:01.570493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.694 10:56:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:45.694 10:56:02 -- common/autotest_common.sh@852 -- # return 0 00:25:45.694 10:56:02 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 [2024-07-10 10:56:02.351929] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:45.694 10:56:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 10:56:02 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 Malloc0 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 [2024-07-10 10:56:02.422904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:45.694 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.694 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:45.694 [2024-07-10 10:56:02.438682] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:45.694 [ 
00:25:45.694 { 00:25:45.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.694 "subtype": "Discovery", 00:25:45.694 "listen_addresses": [ 00:25:45.694 { 00:25:45.694 "transport": "TCP", 00:25:45.694 "trtype": "TCP", 00:25:45.694 "adrfam": "IPv4", 00:25:45.694 "traddr": "10.0.0.2", 00:25:45.694 "trsvcid": "4420" 00:25:45.694 } 00:25:45.694 ], 00:25:45.694 "allow_any_host": true, 00:25:45.694 "hosts": [] 00:25:45.694 }, 00:25:45.694 { 00:25:45.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.694 "subtype": "NVMe", 00:25:45.694 "listen_addresses": [ 00:25:45.694 { 00:25:45.694 "transport": "TCP", 00:25:45.694 "trtype": "TCP", 00:25:45.694 "adrfam": "IPv4", 00:25:45.694 "traddr": "10.0.0.2", 00:25:45.694 "trsvcid": "4420" 00:25:45.694 } 00:25:45.694 ], 00:25:45.694 "allow_any_host": true, 00:25:45.694 "hosts": [], 00:25:45.694 "serial_number": "SPDK00000000000001", 00:25:45.694 "model_number": "SPDK bdev Controller", 00:25:45.694 "max_namespaces": 32, 00:25:45.694 "min_cntlid": 1, 00:25:45.694 "max_cntlid": 65519, 00:25:45.694 "namespaces": [ 00:25:45.694 { 00:25:45.694 "nsid": 1, 00:25:45.694 "bdev_name": "Malloc0", 00:25:45.694 "name": "Malloc0", 00:25:45.694 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:45.694 "eui64": "ABCDEF0123456789", 00:25:45.694 "uuid": "6de80921-8737-463c-9736-7c9e08d34f3d" 00:25:45.694 } 00:25:45.694 ] 00:25:45.694 } 00:25:45.694 ] 00:25:45.694 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.694 10:56:02 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:45.694 [2024-07-10 10:56:02.459086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
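Condensed for reference: the RPC sequence that produced the subsystem listing above, followed by the identify invocation, written out as plain scripts/rpc.py calls. This is a sketch under this run's names and addresses; the harness issues the same commands through its rpc_cmd wrapper:

  # Create the TCP transport and a Malloc-backed subsystem, then expose it and the discovery service on 10.0.0.2:4420.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Enumerate the discovery and NVM subsystems from the initiator side.
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all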
00:25:45.694 [2024-07-10 10:56:02.459124] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541969 ] 00:25:45.694 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.694 [2024-07-10 10:56:02.491839] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:45.694 [2024-07-10 10:56:02.491891] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:45.694 [2024-07-10 10:56:02.491900] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:45.694 [2024-07-10 10:56:02.491914] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:45.694 [2024-07-10 10:56:02.491926] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:45.694 [2024-07-10 10:56:02.495470] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:45.694 [2024-07-10 10:56:02.495525] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f48eb0 0 00:25:45.694 [2024-07-10 10:56:02.503441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:45.694 [2024-07-10 10:56:02.503461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:45.694 [2024-07-10 10:56:02.503469] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:45.694 [2024-07-10 10:56:02.503475] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:45.694 [2024-07-10 10:56:02.503537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.694 [2024-07-10 10:56:02.503549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.694 [2024-07-10 10:56:02.503557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.694 [2024-07-10 10:56:02.503574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:45.694 [2024-07-10 10:56:02.503600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.694 [2024-07-10 10:56:02.511440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.694 [2024-07-10 10:56:02.511457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.694 [2024-07-10 10:56:02.511465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.694 [2024-07-10 10:56:02.511472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.694 [2024-07-10 10:56:02.511506] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:45.694 [2024-07-10 10:56:02.511518] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:45.694 [2024-07-10 10:56:02.511527] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:45.694 [2024-07-10 10:56:02.511545] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.694 [2024-07-10 10:56:02.511554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:45.694 [2024-07-10 10:56:02.511561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.694 [2024-07-10 10:56:02.511572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.694 [2024-07-10 10:56:02.511596] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.694 [2024-07-10 10:56:02.511766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.694 [2024-07-10 10:56:02.511778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.694 [2024-07-10 10:56:02.511789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.694 [2024-07-10 10:56:02.511796] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.694 [2024-07-10 10:56:02.511807] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:45.694 [2024-07-10 10:56:02.511820] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:45.695 [2024-07-10 10:56:02.511832] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.511840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.511846] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.511857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.695 [2024-07-10 10:56:02.511878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.512005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 10:56:02.512020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.512027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.512043] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:45.695 [2024-07-10 10:56:02.512057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:45.695 [2024-07-10 10:56:02.512069] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512077] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512083] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.512094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.695 [2024-07-10 10:56:02.512114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.512231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 
10:56:02.512246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.512253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.512269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:45.695 [2024-07-10 10:56:02.512288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.512338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.695 [2024-07-10 10:56:02.512371] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.512535] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 10:56:02.512555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.512563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512574] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.512599] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:45.695 [2024-07-10 10:56:02.512617] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:45.695 [2024-07-10 10:56:02.512639] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:45.695 [2024-07-10 10:56:02.512756] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:45.695 [2024-07-10 10:56:02.512768] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:45.695 [2024-07-10 10:56:02.512782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.512797] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.512808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.695 [2024-07-10 10:56:02.512845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.513046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 10:56:02.513059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.513066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.513082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:45.695 [2024-07-10 10:56:02.513098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513113] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.513123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.695 [2024-07-10 10:56:02.513144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.513262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 10:56:02.513277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.513284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.513301] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:45.695 [2024-07-10 10:56:02.513315] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:45.695 [2024-07-10 10:56:02.513337] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:45.695 [2024-07-10 10:56:02.513366] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:45.695 [2024-07-10 10:56:02.513388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513403] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.513414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.695 [2024-07-10 10:56:02.513449] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.513650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.695 [2024-07-10 10:56:02.513666] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.695 [2024-07-10 10:56:02.513673] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513680] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f48eb0): datao=0, datal=4096, cccid=0 00:25:45.695 [2024-07-10 10:56:02.513688] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa1f80) on tqpair(0x1f48eb0): 
expected_datao=0, payload_size=4096 00:25:45.695 [2024-07-10 10:56:02.513700] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513709] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 10:56:02.513742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.513749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513756] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.513768] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:45.695 [2024-07-10 10:56:02.513777] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:45.695 [2024-07-10 10:56:02.513785] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:45.695 [2024-07-10 10:56:02.513793] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:45.695 [2024-07-10 10:56:02.513801] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:45.695 [2024-07-10 10:56:02.513809] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:45.695 [2024-07-10 10:56:02.513828] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:45.695 [2024-07-10 10:56:02.513842] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513850] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.513856] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.513867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.695 [2024-07-10 10:56:02.513889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.695 [2024-07-10 10:56:02.514024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.695 [2024-07-10 10:56:02.514039] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.695 [2024-07-10 10:56:02.514046] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514052] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa1f80) on tqpair=0x1f48eb0 00:25:45.695 [2024-07-10 10:56:02.514065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514072] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514079] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.514089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:45.695 [2024-07-10 10:56:02.514098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514109] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.514125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.695 [2024-07-10 10:56:02.514135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.514157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.695 [2024-07-10 10:56:02.514166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.695 [2024-07-10 10:56:02.514179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f48eb0) 00:25:45.695 [2024-07-10 10:56:02.514188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.696 [2024-07-10 10:56:02.514196] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:45.696 [2024-07-10 10:56:02.514215] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:45.696 [2024-07-10 10:56:02.514227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514241] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f48eb0) 00:25:45.696 [2024-07-10 10:56:02.514265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.696 [2024-07-10 10:56:02.514288] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa1f80, cid 0, qid 0 00:25:45.696 [2024-07-10 10:56:02.514299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa20e0, cid 1, qid 0 00:25:45.696 [2024-07-10 10:56:02.514306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2240, cid 2, qid 0 00:25:45.696 [2024-07-10 10:56:02.514314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa23a0, cid 3, qid 0 00:25:45.696 [2024-07-10 10:56:02.514336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2500, cid 4, qid 0 00:25:45.696 [2024-07-10 10:56:02.514550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.696 [2024-07-10 10:56:02.514572] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.696 [2024-07-10 10:56:02.514586] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514596] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa2500) on tqpair=0x1f48eb0 00:25:45.696 [2024-07-10 10:56:02.514606] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:45.696 [2024-07-10 10:56:02.514615] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:45.696 [2024-07-10 10:56:02.514634] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514643] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514650] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f48eb0) 00:25:45.696 [2024-07-10 10:56:02.514662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.696 [2024-07-10 10:56:02.514696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2500, cid 4, qid 0 00:25:45.696 [2024-07-10 10:56:02.514874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.696 [2024-07-10 10:56:02.514891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.696 [2024-07-10 10:56:02.514898] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514905] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f48eb0): datao=0, datal=4096, cccid=4 00:25:45.696 [2024-07-10 10:56:02.514913] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa2500) on tqpair(0x1f48eb0): expected_datao=0, payload_size=4096 00:25:45.696 [2024-07-10 10:56:02.514931] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.696 [2024-07-10 10:56:02.514940] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.955 [2024-07-10 10:56:02.559439] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.955 [2024-07-10 10:56:02.559460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.955 [2024-07-10 10:56:02.559468] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa2500) on tqpair=0x1f48eb0 00:25:45.956 [2024-07-10 10:56:02.559496] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:45.956 [2024-07-10 10:56:02.559533] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f48eb0) 00:25:45.956 [2024-07-10 10:56:02.559562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.956 [2024-07-10 10:56:02.559574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f48eb0) 00:25:45.956 [2024-07-10 
10:56:02.559597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.956 [2024-07-10 10:56:02.559626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2500, cid 4, qid 0 00:25:45.956 [2024-07-10 10:56:02.559638] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2660, cid 5, qid 0 00:25:45.956 [2024-07-10 10:56:02.559848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.956 [2024-07-10 10:56:02.559863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.956 [2024-07-10 10:56:02.559870] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559877] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f48eb0): datao=0, datal=1024, cccid=4 00:25:45.956 [2024-07-10 10:56:02.559884] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa2500) on tqpair(0x1f48eb0): expected_datao=0, payload_size=1024 00:25:45.956 [2024-07-10 10:56:02.559895] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559903] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.956 [2024-07-10 10:56:02.559921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.956 [2024-07-10 10:56:02.559927] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.559934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa2660) on tqpair=0x1f48eb0 00:25:45.956 [2024-07-10 10:56:02.600635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.956 [2024-07-10 10:56:02.600653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.956 [2024-07-10 10:56:02.600661] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.600672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa2500) on tqpair=0x1f48eb0 00:25:45.956 [2024-07-10 10:56:02.600697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.600707] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.600714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f48eb0) 00:25:45.956 [2024-07-10 10:56:02.600725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.956 [2024-07-10 10:56:02.600755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2500, cid 4, qid 0 00:25:45.956 [2024-07-10 10:56:02.600927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.956 [2024-07-10 10:56:02.600943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.956 [2024-07-10 10:56:02.600950] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.600957] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f48eb0): datao=0, datal=3072, cccid=4 00:25:45.956 [2024-07-10 10:56:02.600965] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa2500) on tqpair(0x1f48eb0): expected_datao=0, payload_size=3072 
00:25:45.956 [2024-07-10 10:56:02.600976] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.600983] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.601045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.956 [2024-07-10 10:56:02.601056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.956 [2024-07-10 10:56:02.601063] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.601070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa2500) on tqpair=0x1f48eb0 00:25:45.956 [2024-07-10 10:56:02.601084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.601093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.601100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f48eb0) 00:25:45.956 [2024-07-10 10:56:02.601110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.956 [2024-07-10 10:56:02.601138] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa2500, cid 4, qid 0 00:25:45.956 [2024-07-10 10:56:02.601278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.956 [2024-07-10 10:56:02.601289] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.956 [2024-07-10 10:56:02.601296] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.601303] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f48eb0): datao=0, datal=8, cccid=4 00:25:45.956 [2024-07-10 10:56:02.601310] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fa2500) on tqpair(0x1f48eb0): expected_datao=0, payload_size=8 00:25:45.956 [2024-07-10 10:56:02.601321] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.601328] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.641558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.956 [2024-07-10 10:56:02.641576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.956 [2024-07-10 10:56:02.641584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.956 [2024-07-10 10:56:02.641590] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa2500) on tqpair=0x1f48eb0 00:25:45.956 ===================================================== 00:25:45.956 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:45.956 ===================================================== 00:25:45.956 Controller Capabilities/Features 00:25:45.956 ================================ 00:25:45.956 Vendor ID: 0000 00:25:45.956 Subsystem Vendor ID: 0000 00:25:45.956 Serial Number: .................... 00:25:45.956 Model Number: ........................................ 
00:25:45.956 Firmware Version: 24.01.1 00:25:45.956 Recommended Arb Burst: 0 00:25:45.956 IEEE OUI Identifier: 00 00 00 00:25:45.956 Multi-path I/O 00:25:45.956 May have multiple subsystem ports: No 00:25:45.956 May have multiple controllers: No 00:25:45.956 Associated with SR-IOV VF: No 00:25:45.956 Max Data Transfer Size: 131072 00:25:45.956 Max Number of Namespaces: 0 00:25:45.956 Max Number of I/O Queues: 1024 00:25:45.956 NVMe Specification Version (VS): 1.3 00:25:45.956 NVMe Specification Version (Identify): 1.3 00:25:45.956 Maximum Queue Entries: 128 00:25:45.956 Contiguous Queues Required: Yes 00:25:45.956 Arbitration Mechanisms Supported 00:25:45.956 Weighted Round Robin: Not Supported 00:25:45.956 Vendor Specific: Not Supported 00:25:45.956 Reset Timeout: 15000 ms 00:25:45.956 Doorbell Stride: 4 bytes 00:25:45.956 NVM Subsystem Reset: Not Supported 00:25:45.956 Command Sets Supported 00:25:45.956 NVM Command Set: Supported 00:25:45.956 Boot Partition: Not Supported 00:25:45.956 Memory Page Size Minimum: 4096 bytes 00:25:45.956 Memory Page Size Maximum: 4096 bytes 00:25:45.956 Persistent Memory Region: Not Supported 00:25:45.956 Optional Asynchronous Events Supported 00:25:45.956 Namespace Attribute Notices: Not Supported 00:25:45.956 Firmware Activation Notices: Not Supported 00:25:45.956 ANA Change Notices: Not Supported 00:25:45.956 PLE Aggregate Log Change Notices: Not Supported 00:25:45.956 LBA Status Info Alert Notices: Not Supported 00:25:45.956 EGE Aggregate Log Change Notices: Not Supported 00:25:45.956 Normal NVM Subsystem Shutdown event: Not Supported 00:25:45.956 Zone Descriptor Change Notices: Not Supported 00:25:45.956 Discovery Log Change Notices: Supported 00:25:45.956 Controller Attributes 00:25:45.956 128-bit Host Identifier: Not Supported 00:25:45.956 Non-Operational Permissive Mode: Not Supported 00:25:45.956 NVM Sets: Not Supported 00:25:45.956 Read Recovery Levels: Not Supported 00:25:45.956 Endurance Groups: Not Supported 00:25:45.956 Predictable Latency Mode: Not Supported 00:25:45.956 Traffic Based Keep ALive: Not Supported 00:25:45.956 Namespace Granularity: Not Supported 00:25:45.956 SQ Associations: Not Supported 00:25:45.956 UUID List: Not Supported 00:25:45.956 Multi-Domain Subsystem: Not Supported 00:25:45.956 Fixed Capacity Management: Not Supported 00:25:45.956 Variable Capacity Management: Not Supported 00:25:45.956 Delete Endurance Group: Not Supported 00:25:45.956 Delete NVM Set: Not Supported 00:25:45.956 Extended LBA Formats Supported: Not Supported 00:25:45.956 Flexible Data Placement Supported: Not Supported 00:25:45.956 00:25:45.956 Controller Memory Buffer Support 00:25:45.956 ================================ 00:25:45.956 Supported: No 00:25:45.956 00:25:45.956 Persistent Memory Region Support 00:25:45.956 ================================ 00:25:45.956 Supported: No 00:25:45.956 00:25:45.956 Admin Command Set Attributes 00:25:45.956 ============================ 00:25:45.956 Security Send/Receive: Not Supported 00:25:45.956 Format NVM: Not Supported 00:25:45.956 Firmware Activate/Download: Not Supported 00:25:45.956 Namespace Management: Not Supported 00:25:45.957 Device Self-Test: Not Supported 00:25:45.957 Directives: Not Supported 00:25:45.957 NVMe-MI: Not Supported 00:25:45.957 Virtualization Management: Not Supported 00:25:45.957 Doorbell Buffer Config: Not Supported 00:25:45.957 Get LBA Status Capability: Not Supported 00:25:45.957 Command & Feature Lockdown Capability: Not Supported 00:25:45.957 Abort Command Limit: 1 00:25:45.957 
Async Event Request Limit: 4 00:25:45.957 Number of Firmware Slots: N/A 00:25:45.957 Firmware Slot 1 Read-Only: N/A 00:25:45.957 Firmware Activation Without Reset: N/A 00:25:45.957 Multiple Update Detection Support: N/A 00:25:45.957 Firmware Update Granularity: No Information Provided 00:25:45.957 Per-Namespace SMART Log: No 00:25:45.957 Asymmetric Namespace Access Log Page: Not Supported 00:25:45.957 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:45.957 Command Effects Log Page: Not Supported 00:25:45.957 Get Log Page Extended Data: Supported 00:25:45.957 Telemetry Log Pages: Not Supported 00:25:45.957 Persistent Event Log Pages: Not Supported 00:25:45.957 Supported Log Pages Log Page: May Support 00:25:45.957 Commands Supported & Effects Log Page: Not Supported 00:25:45.957 Feature Identifiers & Effects Log Page:May Support 00:25:45.957 NVMe-MI Commands & Effects Log Page: May Support 00:25:45.957 Data Area 4 for Telemetry Log: Not Supported 00:25:45.957 Error Log Page Entries Supported: 128 00:25:45.957 Keep Alive: Not Supported 00:25:45.957 00:25:45.957 NVM Command Set Attributes 00:25:45.957 ========================== 00:25:45.957 Submission Queue Entry Size 00:25:45.957 Max: 1 00:25:45.957 Min: 1 00:25:45.957 Completion Queue Entry Size 00:25:45.957 Max: 1 00:25:45.957 Min: 1 00:25:45.957 Number of Namespaces: 0 00:25:45.957 Compare Command: Not Supported 00:25:45.957 Write Uncorrectable Command: Not Supported 00:25:45.957 Dataset Management Command: Not Supported 00:25:45.957 Write Zeroes Command: Not Supported 00:25:45.957 Set Features Save Field: Not Supported 00:25:45.957 Reservations: Not Supported 00:25:45.957 Timestamp: Not Supported 00:25:45.957 Copy: Not Supported 00:25:45.957 Volatile Write Cache: Not Present 00:25:45.957 Atomic Write Unit (Normal): 1 00:25:45.957 Atomic Write Unit (PFail): 1 00:25:45.957 Atomic Compare & Write Unit: 1 00:25:45.957 Fused Compare & Write: Supported 00:25:45.957 Scatter-Gather List 00:25:45.957 SGL Command Set: Supported 00:25:45.957 SGL Keyed: Supported 00:25:45.957 SGL Bit Bucket Descriptor: Not Supported 00:25:45.957 SGL Metadata Pointer: Not Supported 00:25:45.957 Oversized SGL: Not Supported 00:25:45.957 SGL Metadata Address: Not Supported 00:25:45.957 SGL Offset: Supported 00:25:45.957 Transport SGL Data Block: Not Supported 00:25:45.957 Replay Protected Memory Block: Not Supported 00:25:45.957 00:25:45.957 Firmware Slot Information 00:25:45.957 ========================= 00:25:45.957 Active slot: 0 00:25:45.957 00:25:45.957 00:25:45.957 Error Log 00:25:45.957 ========= 00:25:45.957 00:25:45.957 Active Namespaces 00:25:45.957 ================= 00:25:45.957 Discovery Log Page 00:25:45.957 ================== 00:25:45.957 Generation Counter: 2 00:25:45.957 Number of Records: 2 00:25:45.957 Record Format: 0 00:25:45.957 00:25:45.957 Discovery Log Entry 0 00:25:45.957 ---------------------- 00:25:45.957 Transport Type: 3 (TCP) 00:25:45.957 Address Family: 1 (IPv4) 00:25:45.957 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:45.957 Entry Flags: 00:25:45.957 Duplicate Returned Information: 1 00:25:45.957 Explicit Persistent Connection Support for Discovery: 1 00:25:45.957 Transport Requirements: 00:25:45.957 Secure Channel: Not Required 00:25:45.957 Port ID: 0 (0x0000) 00:25:45.957 Controller ID: 65535 (0xffff) 00:25:45.957 Admin Max SQ Size: 128 00:25:45.957 Transport Service Identifier: 4420 00:25:45.957 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:45.957 Transport Address: 10.0.0.2 00:25:45.957 
Discovery Log Entry 1 00:25:45.957 ---------------------- 00:25:45.957 Transport Type: 3 (TCP) 00:25:45.957 Address Family: 1 (IPv4) 00:25:45.957 Subsystem Type: 2 (NVM Subsystem) 00:25:45.957 Entry Flags: 00:25:45.957 Duplicate Returned Information: 0 00:25:45.957 Explicit Persistent Connection Support for Discovery: 0 00:25:45.957 Transport Requirements: 00:25:45.957 Secure Channel: Not Required 00:25:45.957 Port ID: 0 (0x0000) 00:25:45.957 Controller ID: 65535 (0xffff) 00:25:45.957 Admin Max SQ Size: 128 00:25:45.957 Transport Service Identifier: 4420 00:25:45.957 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:45.957 Transport Address: 10.0.0.2 [2024-07-10 10:56:02.641710] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:45.957 [2024-07-10 10:56:02.641735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.957 [2024-07-10 10:56:02.641747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.957 [2024-07-10 10:56:02.641760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.957 [2024-07-10 10:56:02.641770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.957 [2024-07-10 10:56:02.641784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.641792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.641799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f48eb0) 00:25:45.957 [2024-07-10 10:56:02.641810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.957 [2024-07-10 10:56:02.641850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa23a0, cid 3, qid 0 00:25:45.957 [2024-07-10 10:56:02.642053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.957 [2024-07-10 10:56:02.642069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.957 [2024-07-10 10:56:02.642076] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.642083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa23a0) on tqpair=0x1f48eb0 00:25:45.957 [2024-07-10 10:56:02.642097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.642104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.642111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f48eb0) 00:25:45.957 [2024-07-10 10:56:02.642121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.957 [2024-07-10 10:56:02.642147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa23a0, cid 3, qid 0 00:25:45.957 [2024-07-10 10:56:02.642304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.957 [2024-07-10 10:56:02.642315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.957 [2024-07-10 10:56:02.642322] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.642329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa23a0) on tqpair=0x1f48eb0 00:25:45.957 [2024-07-10 10:56:02.642338] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:45.957 [2024-07-10 10:56:02.642347] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:45.957 [2024-07-10 10:56:02.642362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.642371] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.642377] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f48eb0) 00:25:45.957 [2024-07-10 10:56:02.642388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.957 [2024-07-10 10:56:02.642408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fa23a0, cid 3, qid 0 00:25:45.957 [2024-07-10 10:56:02.646436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.957 [2024-07-10 10:56:02.646453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.957 [2024-07-10 10:56:02.646460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.957 [2024-07-10 10:56:02.646466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1fa23a0) on tqpair=0x1f48eb0 00:25:45.957 [2024-07-10 10:56:02.646482] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:25:45.957 00:25:45.957 10:56:02 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:45.957 [2024-07-10 10:56:02.679651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:45.958 [2024-07-10 10:56:02.679697] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541972 ] 00:25:45.958 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.958 [2024-07-10 10:56:02.712277] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:45.958 [2024-07-10 10:56:02.712327] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:45.958 [2024-07-10 10:56:02.712336] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:45.958 [2024-07-10 10:56:02.712350] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:45.958 [2024-07-10 10:56:02.712361] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:45.958 [2024-07-10 10:56:02.715484] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:45.958 [2024-07-10 10:56:02.715541] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf62eb0 0 00:25:45.958 [2024-07-10 10:56:02.723448] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:45.958 [2024-07-10 10:56:02.723467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:45.958 [2024-07-10 10:56:02.723492] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:45.958 [2024-07-10 10:56:02.723498] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:45.958 [2024-07-10 10:56:02.723539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.723552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.723559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.723573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:45.958 [2024-07-10 10:56:02.723600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.730454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.730473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 [2024-07-10 10:56:02.730481] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.730490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 [2024-07-10 10:56:02.730504] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:45.958 [2024-07-10 10:56:02.730519] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:45.958 [2024-07-10 10:56:02.730528] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:45.958 [2024-07-10 10:56:02.730544] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.730553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 
10:56:02.730559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.730571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.958 [2024-07-10 10:56:02.730594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.730771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.730788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 [2024-07-10 10:56:02.730800] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.730807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 [2024-07-10 10:56:02.730816] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:45.958 [2024-07-10 10:56:02.730831] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:45.958 [2024-07-10 10:56:02.730847] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.730855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.730862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.730873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.958 [2024-07-10 10:56:02.730895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.731055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.731071] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 [2024-07-10 10:56:02.731078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731085] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 [2024-07-10 10:56:02.731094] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:45.958 [2024-07-10 10:56:02.731109] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:45.958 [2024-07-10 10:56:02.731124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731132] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.731149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.958 [2024-07-10 10:56:02.731171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.731297] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.731313] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 
[2024-07-10 10:56:02.731320] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 [2024-07-10 10:56:02.731336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:45.958 [2024-07-10 10:56:02.731357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.731384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.958 [2024-07-10 10:56:02.731408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.731545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.731562] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 [2024-07-10 10:56:02.731569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 [2024-07-10 10:56:02.731583] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:45.958 [2024-07-10 10:56:02.731596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:45.958 [2024-07-10 10:56:02.731612] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:45.958 [2024-07-10 10:56:02.731739] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:45.958 [2024-07-10 10:56:02.731747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:45.958 [2024-07-10 10:56:02.731759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.731773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.731783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.958 [2024-07-10 10:56:02.731820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.732028] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.732045] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 [2024-07-10 10:56:02.732052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.732059] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 
[2024-07-10 10:56:02.732067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:45.958 [2024-07-10 10:56:02.732086] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.732097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.732104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.958 [2024-07-10 10:56:02.732114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.958 [2024-07-10 10:56:02.732136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.958 [2024-07-10 10:56:02.732276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.958 [2024-07-10 10:56:02.732292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.958 [2024-07-10 10:56:02.732299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.958 [2024-07-10 10:56:02.732306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.958 [2024-07-10 10:56:02.732314] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:45.958 [2024-07-10 10:56:02.732322] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:45.958 [2024-07-10 10:56:02.732337] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:45.959 [2024-07-10 10:56:02.732358] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.732372] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.732380] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.732387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.732397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.959 [2024-07-10 10:56:02.732444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.959 [2024-07-10 10:56:02.732674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.959 [2024-07-10 10:56:02.732695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.959 [2024-07-10 10:56:02.732707] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.732718] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=4096, cccid=0 00:25:45.959 [2024-07-10 10:56:02.732730] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbbf80) on tqpair(0xf62eb0): expected_datao=0, payload_size=4096 00:25:45.959 [2024-07-10 10:56:02.732755] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.732766] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:25:45.959 [2024-07-10 10:56:02.773646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.959 [2024-07-10 10:56:02.773675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.959 [2024-07-10 10:56:02.773693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.773708] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.959 [2024-07-10 10:56:02.773727] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:45.959 [2024-07-10 10:56:02.773742] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:45.959 [2024-07-10 10:56:02.773754] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:45.959 [2024-07-10 10:56:02.773766] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:45.959 [2024-07-10 10:56:02.773777] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:45.959 [2024-07-10 10:56:02.773786] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.773809] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.773824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.773832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.773839] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.773850] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.959 [2024-07-10 10:56:02.773875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.959 [2024-07-10 10:56:02.774037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.959 [2024-07-10 10:56:02.774054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.959 [2024-07-10 10:56:02.774061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbbf80) on tqpair=0xf62eb0 00:25:45.959 [2024-07-10 10:56:02.774079] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774093] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.774104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.959 [2024-07-10 10:56:02.774114] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.774140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.959 [2024-07-10 10:56:02.774152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.774191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.959 [2024-07-10 10:56:02.774209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.774242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.959 [2024-07-10 10:56:02.774251] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.774274] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.774289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774297] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.774314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.959 [2024-07-10 10:56:02.774338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbbf80, cid 0, qid 0 00:25:45.959 [2024-07-10 10:56:02.774350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc0e0, cid 1, qid 0 00:25:45.959 [2024-07-10 10:56:02.774358] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc240, cid 2, qid 0 00:25:45.959 [2024-07-10 10:56:02.774366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:45.959 [2024-07-10 10:56:02.774374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:45.959 [2024-07-10 10:56:02.774593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.959 [2024-07-10 10:56:02.774617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.959 [2024-07-10 10:56:02.774628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:45.959 [2024-07-10 10:56:02.774644] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:45.959 [2024-07-10 10:56:02.774653] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.774669] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.774688] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.774700] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.774714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.774725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.959 [2024-07-10 10:56:02.774766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:45.959 [2024-07-10 10:56:02.774981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.959 [2024-07-10 10:56:02.774997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.959 [2024-07-10 10:56:02.775004] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.775011] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:45.959 [2024-07-10 10:56:02.775076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.775096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:45.959 [2024-07-10 10:56:02.775127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.775135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.959 [2024-07-10 10:56:02.775141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:45.959 [2024-07-10 10:56:02.775152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.959 [2024-07-10 10:56:02.775173] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:45.959 [2024-07-10 10:56:02.775383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.959 [2024-07-10 10:56:02.775406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.221 [2024-07-10 10:56:02.775422] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779448] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=4096, cccid=4 00:25:46.221 [2024-07-10 10:56:02.779458] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc500) on tqpair(0xf62eb0): expected_datao=0, payload_size=4096 00:25:46.221 [2024-07-10 10:56:02.779471] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779479] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779491] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.779502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.221 [2024-07-10 10:56:02.779509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:46.221 [2024-07-10 10:56:02.779535] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:46.221 [2024-07-10 10:56:02.779551] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.779571] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.779587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:46.221 [2024-07-10 10:56:02.779612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.221 [2024-07-10 10:56:02.779636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:46.221 [2024-07-10 10:56:02.779842] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.221 [2024-07-10 10:56:02.779859] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.221 [2024-07-10 10:56:02.779866] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779876] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=4096, cccid=4 00:25:46.221 [2024-07-10 10:56:02.779894] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc500) on tqpair(0xf62eb0): expected_datao=0, payload_size=4096 00:25:46.221 [2024-07-10 10:56:02.779913] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779926] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.779959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.221 [2024-07-10 10:56:02.779966] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.779973] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:46.221 [2024-07-10 10:56:02.779996] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780017] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.221 [2024-07-10 
10:56:02.780048] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:46.221 [2024-07-10 10:56:02.780059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.221 [2024-07-10 10:56:02.780081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:46.221 [2024-07-10 10:56:02.780224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.221 [2024-07-10 10:56:02.780240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.221 [2024-07-10 10:56:02.780248] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780258] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=4096, cccid=4 00:25:46.221 [2024-07-10 10:56:02.780270] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc500) on tqpair(0xf62eb0): expected_datao=0, payload_size=4096 00:25:46.221 [2024-07-10 10:56:02.780287] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780299] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.780338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.221 [2024-07-10 10:56:02.780345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780352] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:46.221 [2024-07-10 10:56:02.780365] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780381] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780419] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780509] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:46.221 [2024-07-10 10:56:02.780522] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:46.221 [2024-07-10 10:56:02.780535] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:46.221 [2024-07-10 10:56:02.780555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780564] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:46.221 [2024-07-10 
10:56:02.780582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.221 [2024-07-10 10:56:02.780593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780600] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780607] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf62eb0) 00:25:46.221 [2024-07-10 10:56:02.780616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.221 [2024-07-10 10:56:02.780642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:46.221 [2024-07-10 10:56:02.780654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc660, cid 5, qid 0 00:25:46.221 [2024-07-10 10:56:02.780795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.780811] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.221 [2024-07-10 10:56:02.780818] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:46.221 [2024-07-10 10:56:02.780835] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.780848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.221 [2024-07-10 10:56:02.780856] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780863] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc660) on tqpair=0xf62eb0 00:25:46.221 [2024-07-10 10:56:02.780879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.780898] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf62eb0) 00:25:46.221 [2024-07-10 10:56:02.780909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.221 [2024-07-10 10:56:02.780945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc660, cid 5, qid 0 00:25:46.221 [2024-07-10 10:56:02.781156] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.781173] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.221 [2024-07-10 10:56:02.781180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.781187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc660) on tqpair=0xf62eb0 00:25:46.221 [2024-07-10 10:56:02.781205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.781216] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.221 [2024-07-10 10:56:02.781223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf62eb0) 00:25:46.221 [2024-07-10 10:56:02.781233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:46.221 [2024-07-10 10:56:02.781255] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc660, cid 5, qid 0 00:25:46.221 [2024-07-10 10:56:02.781378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.221 [2024-07-10 10:56:02.781396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.222 [2024-07-10 10:56:02.781404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781414] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc660) on tqpair=0xf62eb0 00:25:46.222 [2024-07-10 10:56:02.781442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf62eb0) 00:25:46.222 [2024-07-10 10:56:02.781471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.222 [2024-07-10 10:56:02.781493] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc660, cid 5, qid 0 00:25:46.222 [2024-07-10 10:56:02.781625] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.222 [2024-07-10 10:56:02.781641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.222 [2024-07-10 10:56:02.781648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc660) on tqpair=0xf62eb0 00:25:46.222 [2024-07-10 10:56:02.781676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781688] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf62eb0) 00:25:46.222 [2024-07-10 10:56:02.781705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.222 [2024-07-10 10:56:02.781717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf62eb0) 00:25:46.222 [2024-07-10 10:56:02.781740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.222 [2024-07-10 10:56:02.781752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf62eb0) 00:25:46.222 [2024-07-10 10:56:02.781790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.222 [2024-07-10 10:56:02.781802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.222 [2024-07-10 
10:56:02.781809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.781815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf62eb0) 00:25:46.222 [2024-07-10 10:56:02.781824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.222 [2024-07-10 10:56:02.781860] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc660, cid 5, qid 0 00:25:46.222 [2024-07-10 10:56:02.781871] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc500, cid 4, qid 0 00:25:46.222 [2024-07-10 10:56:02.781879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc7c0, cid 6, qid 0 00:25:46.222 [2024-07-10 10:56:02.781886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc920, cid 7, qid 0 00:25:46.222 [2024-07-10 10:56:02.782173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.222 [2024-07-10 10:56:02.782195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.222 [2024-07-10 10:56:02.782207] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782218] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=8192, cccid=5 00:25:46.222 [2024-07-10 10:56:02.782234] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc660) on tqpair(0xf62eb0): expected_datao=0, payload_size=8192 00:25:46.222 [2024-07-10 10:56:02.782319] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782332] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.222 [2024-07-10 10:56:02.782351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.222 [2024-07-10 10:56:02.782360] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782371] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=512, cccid=4 00:25:46.222 [2024-07-10 10:56:02.782382] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc500) on tqpair(0xf62eb0): expected_datao=0, payload_size=512 00:25:46.222 [2024-07-10 10:56:02.782398] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782411] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.222 [2024-07-10 10:56:02.782448] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.222 [2024-07-10 10:56:02.782455] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782461] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=512, cccid=6 00:25:46.222 [2024-07-10 10:56:02.782469] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc7c0) on tqpair(0xf62eb0): expected_datao=0, payload_size=512 00:25:46.222 [2024-07-10 10:56:02.782479] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782486] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.222 
[2024-07-10 10:56:02.782495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:46.222 [2024-07-10 10:56:02.782504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:46.222 [2024-07-10 10:56:02.782511] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782517] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf62eb0): datao=0, datal=4096, cccid=7 00:25:46.222 [2024-07-10 10:56:02.782524] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfbc920) on tqpair(0xf62eb0): expected_datao=0, payload_size=4096 00:25:46.222 [2024-07-10 10:56:02.782535] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782542] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782554] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.222 [2024-07-10 10:56:02.782564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.222 [2024-07-10 10:56:02.782571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc660) on tqpair=0xf62eb0 00:25:46.222 [2024-07-10 10:56:02.782598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.222 [2024-07-10 10:56:02.782609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.222 [2024-07-10 10:56:02.782616] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782623] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc500) on tqpair=0xf62eb0 00:25:46.222 [2024-07-10 10:56:02.782637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.222 [2024-07-10 10:56:02.782647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.222 [2024-07-10 10:56:02.782654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc7c0) on tqpair=0xf62eb0 00:25:46.222 [2024-07-10 10:56:02.782671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.222 [2024-07-10 10:56:02.782681] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.222 [2024-07-10 10:56:02.782691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.222 [2024-07-10 10:56:02.782698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc920) on tqpair=0xf62eb0 00:25:46.222 ===================================================== 00:25:46.222 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:46.222 ===================================================== 00:25:46.222 Controller Capabilities/Features 00:25:46.222 ================================ 00:25:46.222 Vendor ID: 8086 00:25:46.222 Subsystem Vendor ID: 8086 00:25:46.222 Serial Number: SPDK00000000000001 00:25:46.222 Model Number: SPDK bdev Controller 00:25:46.222 Firmware Version: 24.01.1 00:25:46.222 Recommended Arb Burst: 6 00:25:46.222 IEEE OUI Identifier: e4 d2 5c 00:25:46.222 Multi-path I/O 00:25:46.222 May have multiple subsystem ports: Yes 00:25:46.222 May have multiple controllers: Yes 00:25:46.222 Associated with SR-IOV VF: No 00:25:46.222 Max Data Transfer Size: 131072 00:25:46.222 Max Number of Namespaces: 32 
00:25:46.222 Max Number of I/O Queues: 127 00:25:46.222 NVMe Specification Version (VS): 1.3 00:25:46.222 NVMe Specification Version (Identify): 1.3 00:25:46.222 Maximum Queue Entries: 128 00:25:46.222 Contiguous Queues Required: Yes 00:25:46.222 Arbitration Mechanisms Supported 00:25:46.222 Weighted Round Robin: Not Supported 00:25:46.222 Vendor Specific: Not Supported 00:25:46.222 Reset Timeout: 15000 ms 00:25:46.222 Doorbell Stride: 4 bytes 00:25:46.222 NVM Subsystem Reset: Not Supported 00:25:46.222 Command Sets Supported 00:25:46.222 NVM Command Set: Supported 00:25:46.222 Boot Partition: Not Supported 00:25:46.222 Memory Page Size Minimum: 4096 bytes 00:25:46.222 Memory Page Size Maximum: 4096 bytes 00:25:46.222 Persistent Memory Region: Not Supported 00:25:46.222 Optional Asynchronous Events Supported 00:25:46.222 Namespace Attribute Notices: Supported 00:25:46.222 Firmware Activation Notices: Not Supported 00:25:46.222 ANA Change Notices: Not Supported 00:25:46.222 PLE Aggregate Log Change Notices: Not Supported 00:25:46.222 LBA Status Info Alert Notices: Not Supported 00:25:46.222 EGE Aggregate Log Change Notices: Not Supported 00:25:46.222 Normal NVM Subsystem Shutdown event: Not Supported 00:25:46.222 Zone Descriptor Change Notices: Not Supported 00:25:46.222 Discovery Log Change Notices: Not Supported 00:25:46.222 Controller Attributes 00:25:46.222 128-bit Host Identifier: Supported 00:25:46.222 Non-Operational Permissive Mode: Not Supported 00:25:46.222 NVM Sets: Not Supported 00:25:46.222 Read Recovery Levels: Not Supported 00:25:46.222 Endurance Groups: Not Supported 00:25:46.222 Predictable Latency Mode: Not Supported 00:25:46.222 Traffic Based Keep ALive: Not Supported 00:25:46.222 Namespace Granularity: Not Supported 00:25:46.222 SQ Associations: Not Supported 00:25:46.222 UUID List: Not Supported 00:25:46.222 Multi-Domain Subsystem: Not Supported 00:25:46.222 Fixed Capacity Management: Not Supported 00:25:46.222 Variable Capacity Management: Not Supported 00:25:46.222 Delete Endurance Group: Not Supported 00:25:46.222 Delete NVM Set: Not Supported 00:25:46.222 Extended LBA Formats Supported: Not Supported 00:25:46.222 Flexible Data Placement Supported: Not Supported 00:25:46.222 00:25:46.222 Controller Memory Buffer Support 00:25:46.222 ================================ 00:25:46.222 Supported: No 00:25:46.222 00:25:46.223 Persistent Memory Region Support 00:25:46.223 ================================ 00:25:46.223 Supported: No 00:25:46.223 00:25:46.223 Admin Command Set Attributes 00:25:46.223 ============================ 00:25:46.223 Security Send/Receive: Not Supported 00:25:46.223 Format NVM: Not Supported 00:25:46.223 Firmware Activate/Download: Not Supported 00:25:46.223 Namespace Management: Not Supported 00:25:46.223 Device Self-Test: Not Supported 00:25:46.223 Directives: Not Supported 00:25:46.223 NVMe-MI: Not Supported 00:25:46.223 Virtualization Management: Not Supported 00:25:46.223 Doorbell Buffer Config: Not Supported 00:25:46.223 Get LBA Status Capability: Not Supported 00:25:46.223 Command & Feature Lockdown Capability: Not Supported 00:25:46.223 Abort Command Limit: 4 00:25:46.223 Async Event Request Limit: 4 00:25:46.223 Number of Firmware Slots: N/A 00:25:46.223 Firmware Slot 1 Read-Only: N/A 00:25:46.223 Firmware Activation Without Reset: N/A 00:25:46.223 Multiple Update Detection Support: N/A 00:25:46.223 Firmware Update Granularity: No Information Provided 00:25:46.223 Per-Namespace SMART Log: No 00:25:46.223 Asymmetric Namespace Access Log Page: Not 
Supported 00:25:46.223 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:46.223 Command Effects Log Page: Supported 00:25:46.223 Get Log Page Extended Data: Supported 00:25:46.223 Telemetry Log Pages: Not Supported 00:25:46.223 Persistent Event Log Pages: Not Supported 00:25:46.223 Supported Log Pages Log Page: May Support 00:25:46.223 Commands Supported & Effects Log Page: Not Supported 00:25:46.223 Feature Identifiers & Effects Log Page:May Support 00:25:46.223 NVMe-MI Commands & Effects Log Page: May Support 00:25:46.223 Data Area 4 for Telemetry Log: Not Supported 00:25:46.223 Error Log Page Entries Supported: 128 00:25:46.223 Keep Alive: Supported 00:25:46.223 Keep Alive Granularity: 10000 ms 00:25:46.223 00:25:46.223 NVM Command Set Attributes 00:25:46.223 ========================== 00:25:46.223 Submission Queue Entry Size 00:25:46.223 Max: 64 00:25:46.223 Min: 64 00:25:46.223 Completion Queue Entry Size 00:25:46.223 Max: 16 00:25:46.223 Min: 16 00:25:46.223 Number of Namespaces: 32 00:25:46.223 Compare Command: Supported 00:25:46.223 Write Uncorrectable Command: Not Supported 00:25:46.223 Dataset Management Command: Supported 00:25:46.223 Write Zeroes Command: Supported 00:25:46.223 Set Features Save Field: Not Supported 00:25:46.223 Reservations: Supported 00:25:46.223 Timestamp: Not Supported 00:25:46.223 Copy: Supported 00:25:46.223 Volatile Write Cache: Present 00:25:46.223 Atomic Write Unit (Normal): 1 00:25:46.223 Atomic Write Unit (PFail): 1 00:25:46.223 Atomic Compare & Write Unit: 1 00:25:46.223 Fused Compare & Write: Supported 00:25:46.223 Scatter-Gather List 00:25:46.223 SGL Command Set: Supported 00:25:46.223 SGL Keyed: Supported 00:25:46.223 SGL Bit Bucket Descriptor: Not Supported 00:25:46.223 SGL Metadata Pointer: Not Supported 00:25:46.223 Oversized SGL: Not Supported 00:25:46.223 SGL Metadata Address: Not Supported 00:25:46.223 SGL Offset: Supported 00:25:46.223 Transport SGL Data Block: Not Supported 00:25:46.223 Replay Protected Memory Block: Not Supported 00:25:46.223 00:25:46.223 Firmware Slot Information 00:25:46.223 ========================= 00:25:46.223 Active slot: 1 00:25:46.223 Slot 1 Firmware Revision: 24.01.1 00:25:46.223 00:25:46.223 00:25:46.223 Commands Supported and Effects 00:25:46.223 ============================== 00:25:46.223 Admin Commands 00:25:46.223 -------------- 00:25:46.223 Get Log Page (02h): Supported 00:25:46.223 Identify (06h): Supported 00:25:46.223 Abort (08h): Supported 00:25:46.223 Set Features (09h): Supported 00:25:46.223 Get Features (0Ah): Supported 00:25:46.223 Asynchronous Event Request (0Ch): Supported 00:25:46.223 Keep Alive (18h): Supported 00:25:46.223 I/O Commands 00:25:46.223 ------------ 00:25:46.223 Flush (00h): Supported LBA-Change 00:25:46.223 Write (01h): Supported LBA-Change 00:25:46.223 Read (02h): Supported 00:25:46.223 Compare (05h): Supported 00:25:46.223 Write Zeroes (08h): Supported LBA-Change 00:25:46.223 Dataset Management (09h): Supported LBA-Change 00:25:46.223 Copy (19h): Supported LBA-Change 00:25:46.223 Unknown (79h): Supported LBA-Change 00:25:46.223 Unknown (7Ah): Supported 00:25:46.223 00:25:46.223 Error Log 00:25:46.223 ========= 00:25:46.223 00:25:46.223 Arbitration 00:25:46.223 =========== 00:25:46.223 Arbitration Burst: 1 00:25:46.223 00:25:46.223 Power Management 00:25:46.223 ================ 00:25:46.223 Number of Power States: 1 00:25:46.223 Current Power State: Power State #0 00:25:46.223 Power State #0: 00:25:46.223 Max Power: 0.00 W 00:25:46.223 Non-Operational State: Operational 
00:25:46.223 Entry Latency: Not Reported 00:25:46.223 Exit Latency: Not Reported 00:25:46.223 Relative Read Throughput: 0 00:25:46.223 Relative Read Latency: 0 00:25:46.223 Relative Write Throughput: 0 00:25:46.223 Relative Write Latency: 0 00:25:46.223 Idle Power: Not Reported 00:25:46.223 Active Power: Not Reported 00:25:46.223 Non-Operational Permissive Mode: Not Supported 00:25:46.223 00:25:46.223 Health Information 00:25:46.223 ================== 00:25:46.223 Critical Warnings: 00:25:46.223 Available Spare Space: OK 00:25:46.223 Temperature: OK 00:25:46.223 Device Reliability: OK 00:25:46.223 Read Only: No 00:25:46.223 Volatile Memory Backup: OK 00:25:46.223 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:46.223 Temperature Threshold: [2024-07-10 10:56:02.782833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.782845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.782852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf62eb0) 00:25:46.223 [2024-07-10 10:56:02.782862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.223 [2024-07-10 10:56:02.782884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc920, cid 7, qid 0 00:25:46.223 [2024-07-10 10:56:02.783118] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.223 [2024-07-10 10:56:02.783134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.223 [2024-07-10 10:56:02.783141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.783148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc920) on tqpair=0xf62eb0 00:25:46.223 [2024-07-10 10:56:02.783194] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:46.223 [2024-07-10 10:56:02.783218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.223 [2024-07-10 10:56:02.783233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.223 [2024-07-10 10:56:02.783243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.223 [2024-07-10 10:56:02.783252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.223 [2024-07-10 10:56:02.783279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.783288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.783294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.223 [2024-07-10 10:56:02.783304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.223 [2024-07-10 10:56:02.783326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.223 [2024-07-10 10:56:02.787440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.223 [2024-07-10 10:56:02.787458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:25:46.223 [2024-07-10 10:56:02.787480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.787488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.223 [2024-07-10 10:56:02.787500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.787509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.787515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.223 [2024-07-10 10:56:02.787526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.223 [2024-07-10 10:56:02.787558] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.223 [2024-07-10 10:56:02.787733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.223 [2024-07-10 10:56:02.787750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.223 [2024-07-10 10:56:02.787757] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.787764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.223 [2024-07-10 10:56:02.787772] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:46.223 [2024-07-10 10:56:02.787784] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:46.223 [2024-07-10 10:56:02.787804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.787814] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.223 [2024-07-10 10:56:02.787821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.223 [2024-07-10 10:56:02.787832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.223 [2024-07-10 10:56:02.787853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.223 [2024-07-10 10:56:02.788024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.223 [2024-07-10 10:56:02.788041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.788048] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788054] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.788073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.788102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.788123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.788262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:25:46.224 [2024-07-10 10:56:02.788278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.788285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.788313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.788340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.788364] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.788494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.788511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.788518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.788543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.788572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.788594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.788732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.788748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.788755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.788788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.788804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.788817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.788840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.788970] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.788988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.788996] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.789019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789029] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.789049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.789070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.789206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.789222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.789229] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.789256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789272] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.789283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.789307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.789436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.789453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.789460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.789486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789496] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.789514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.789535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.789674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.789690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.789697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789706] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.789729] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.789758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.789781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.789917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.789933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.789940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.789965] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.789983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.789993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.790014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.790152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.790169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.790176] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790183] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.790201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.790230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.790252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.790375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.790392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.790399] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.790431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790446] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 
10:56:02.790455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.790469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.790496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.790668] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.790684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.790691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790698] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.790715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.790738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.790749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.224 [2024-07-10 10:56:02.790786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.224 [2024-07-10 10:56:02.790991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.224 [2024-07-10 10:56:02.791008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.224 [2024-07-10 10:56:02.791018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.791025] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.224 [2024-07-10 10:56:02.791042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.791051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.224 [2024-07-10 10:56:02.791061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.224 [2024-07-10 10:56:02.791072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.225 [2024-07-10 10:56:02.791094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.225 [2024-07-10 10:56:02.791231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.225 [2024-07-10 10:56:02.791247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.225 [2024-07-10 10:56:02.791254] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.791261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.225 [2024-07-10 10:56:02.791280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.791290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.791297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.225 [2024-07-10 10:56:02.791308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.225 [2024-07-10 10:56:02.791329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.225 [2024-07-10 10:56:02.795435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.225 [2024-07-10 10:56:02.795453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.225 [2024-07-10 10:56:02.795460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.795466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.225 [2024-07-10 10:56:02.795486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.795497] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.795503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf62eb0) 00:25:46.225 [2024-07-10 10:56:02.795514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.225 [2024-07-10 10:56:02.795536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfbc3a0, cid 3, qid 0 00:25:46.225 [2024-07-10 10:56:02.795726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:46.225 [2024-07-10 10:56:02.795743] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:46.225 [2024-07-10 10:56:02.795750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:46.225 [2024-07-10 10:56:02.795757] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xfbc3a0) on tqpair=0xf62eb0 00:25:46.225 [2024-07-10 10:56:02.795771] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:46.225 0 Kelvin (-273 Celsius) 00:25:46.225 Available Spare: 0% 00:25:46.225 Available Spare Threshold: 0% 00:25:46.225 Life Percentage Used: 0% 00:25:46.225 Data Units Read: 0 00:25:46.225 Data Units Written: 0 00:25:46.225 Host Read Commands: 0 00:25:46.225 Host Write Commands: 0 00:25:46.225 Controller Busy Time: 0 minutes 00:25:46.225 Power Cycles: 0 00:25:46.225 Power On Hours: 0 hours 00:25:46.225 Unsafe Shutdowns: 0 00:25:46.225 Unrecoverable Media Errors: 0 00:25:46.225 Lifetime Error Log Entries: 0 00:25:46.225 Warning Temperature Time: 0 minutes 00:25:46.225 Critical Temperature Time: 0 minutes 00:25:46.225 00:25:46.225 Number of Queues 00:25:46.225 ================ 00:25:46.225 Number of I/O Submission Queues: 127 00:25:46.225 Number of I/O Completion Queues: 127 00:25:46.225 00:25:46.225 Active Namespaces 00:25:46.225 ================= 00:25:46.225 Namespace ID:1 00:25:46.225 Error Recovery Timeout: Unlimited 00:25:46.225 Command Set Identifier: NVM (00h) 00:25:46.225 Deallocate: Supported 00:25:46.225 Deallocated/Unwritten Error: Not Supported 00:25:46.225 Deallocated Read Value: Unknown 00:25:46.225 Deallocate in Write Zeroes: Not Supported 00:25:46.225 Deallocated Guard Field: 0xFFFF 00:25:46.225 Flush: Supported 00:25:46.225 Reservation: Supported 00:25:46.225 Namespace Sharing Capabilities: Multiple Controllers 00:25:46.225 Size (in LBAs): 131072 (0GiB) 00:25:46.225 Capacity (in LBAs): 131072 (0GiB) 00:25:46.225 Utilization (in LBAs): 131072 (0GiB) 00:25:46.225 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:46.225 EUI64: ABCDEF0123456789 
00:25:46.225 UUID: 6de80921-8737-463c-9736-7c9e08d34f3d 00:25:46.225 Thin Provisioning: Not Supported 00:25:46.225 Per-NS Atomic Units: Yes 00:25:46.225 Atomic Boundary Size (Normal): 0 00:25:46.225 Atomic Boundary Size (PFail): 0 00:25:46.225 Atomic Boundary Offset: 0 00:25:46.225 Maximum Single Source Range Length: 65535 00:25:46.225 Maximum Copy Length: 65535 00:25:46.225 Maximum Source Range Count: 1 00:25:46.225 NGUID/EUI64 Never Reused: No 00:25:46.225 Namespace Write Protected: No 00:25:46.225 Number of LBA Formats: 1 00:25:46.225 Current LBA Format: LBA Format #00 00:25:46.225 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:46.225 00:25:46.225 10:56:02 -- host/identify.sh@51 -- # sync 00:25:46.225 10:56:02 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.225 10:56:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.225 10:56:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.225 10:56:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.225 10:56:02 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:46.225 10:56:02 -- host/identify.sh@56 -- # nvmftestfini 00:25:46.225 10:56:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:46.225 10:56:02 -- nvmf/common.sh@116 -- # sync 00:25:46.225 10:56:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:46.225 10:56:02 -- nvmf/common.sh@119 -- # set +e 00:25:46.225 10:56:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:46.225 10:56:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:46.225 rmmod nvme_tcp 00:25:46.225 rmmod nvme_fabrics 00:25:46.225 rmmod nvme_keyring 00:25:46.225 10:56:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:46.225 10:56:02 -- nvmf/common.sh@123 -- # set -e 00:25:46.225 10:56:02 -- nvmf/common.sh@124 -- # return 0 00:25:46.225 10:56:02 -- nvmf/common.sh@477 -- # '[' -n 3541810 ']' 00:25:46.225 10:56:02 -- nvmf/common.sh@478 -- # killprocess 3541810 00:25:46.225 10:56:02 -- common/autotest_common.sh@926 -- # '[' -z 3541810 ']' 00:25:46.225 10:56:02 -- common/autotest_common.sh@930 -- # kill -0 3541810 00:25:46.225 10:56:02 -- common/autotest_common.sh@931 -- # uname 00:25:46.225 10:56:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.225 10:56:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3541810 00:25:46.225 10:56:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:46.225 10:56:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:46.225 10:56:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3541810' 00:25:46.225 killing process with pid 3541810 00:25:46.225 10:56:02 -- common/autotest_common.sh@945 -- # kill 3541810 00:25:46.225 [2024-07-10 10:56:02.917464] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:46.225 10:56:02 -- common/autotest_common.sh@950 -- # wait 3541810 00:25:46.483 10:56:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:46.483 10:56:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:46.483 10:56:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:46.483 10:56:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:46.483 10:56:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:46.483 10:56:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.483 10:56:03 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:25:46.483 10:56:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.449 10:56:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:48.449 00:25:48.449 real 0m6.031s 00:25:48.449 user 0m7.155s 00:25:48.449 sys 0m1.878s 00:25:48.449 10:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.449 10:56:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.449 ************************************ 00:25:48.449 END TEST nvmf_identify 00:25:48.449 ************************************ 00:25:48.449 10:56:05 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:48.449 10:56:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:48.449 10:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.449 10:56:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.449 ************************************ 00:25:48.449 START TEST nvmf_perf 00:25:48.449 ************************************ 00:25:48.449 10:56:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:48.707 * Looking for test storage... 00:25:48.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.707 10:56:05 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.707 10:56:05 -- nvmf/common.sh@7 -- # uname -s 00:25:48.707 10:56:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.707 10:56:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.707 10:56:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.707 10:56:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.707 10:56:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.707 10:56:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.707 10:56:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.707 10:56:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.707 10:56:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.707 10:56:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.707 10:56:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.707 10:56:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.707 10:56:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.707 10:56:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.707 10:56:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.707 10:56:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.707 10:56:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.707 10:56:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.707 10:56:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.707 10:56:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.707 10:56:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.707 10:56:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.707 10:56:05 -- paths/export.sh@5 -- # export PATH 00:25:48.707 10:56:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.707 10:56:05 -- nvmf/common.sh@46 -- # : 0 00:25:48.707 10:56:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:48.707 10:56:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:48.707 10:56:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:48.707 10:56:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.707 10:56:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.707 10:56:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:48.707 10:56:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:48.707 10:56:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:48.707 10:56:05 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:48.707 10:56:05 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:48.707 10:56:05 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:48.707 10:56:05 -- host/perf.sh@17 -- # nvmftestinit 00:25:48.707 10:56:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.707 10:56:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.707 10:56:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.707 10:56:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.707 10:56:05 -- 
nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.707 10:56:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.707 10:56:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.707 10:56:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.707 10:56:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.707 10:56:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.707 10:56:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.707 10:56:05 -- common/autotest_common.sh@10 -- # set +x 00:25:50.608 10:56:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:50.608 10:56:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:50.608 10:56:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:50.608 10:56:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:50.608 10:56:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:50.608 10:56:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:50.608 10:56:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:50.608 10:56:07 -- nvmf/common.sh@294 -- # net_devs=() 00:25:50.608 10:56:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:50.608 10:56:07 -- nvmf/common.sh@295 -- # e810=() 00:25:50.608 10:56:07 -- nvmf/common.sh@295 -- # local -ga e810 00:25:50.608 10:56:07 -- nvmf/common.sh@296 -- # x722=() 00:25:50.608 10:56:07 -- nvmf/common.sh@296 -- # local -ga x722 00:25:50.608 10:56:07 -- nvmf/common.sh@297 -- # mlx=() 00:25:50.608 10:56:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:50.608 10:56:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.608 10:56:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:50.608 10:56:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:50.608 10:56:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:50.608 10:56:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:50.608 10:56:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:50.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:50.608 10:56:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:50.608 10:56:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:50.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:50.608 10:56:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:50.608 10:56:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:50.608 10:56:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.608 10:56:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:50.608 10:56:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.608 10:56:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:50.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:50.608 10:56:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.608 10:56:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:50.608 10:56:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.608 10:56:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:50.608 10:56:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.608 10:56:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:50.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:50.608 10:56:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.608 10:56:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:50.608 10:56:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:50.608 10:56:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:50.608 10:56:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:50.608 10:56:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.608 10:56:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.609 10:56:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.609 10:56:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:50.609 10:56:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.609 10:56:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.609 10:56:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:50.609 10:56:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.609 10:56:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.609 10:56:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:50.609 10:56:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:50.609 10:56:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.609 10:56:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.609 10:56:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.609 10:56:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.609 10:56:07 -- nvmf/common.sh@257 -- # ip link set 
cvl_0_1 up 00:25:50.609 10:56:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.609 10:56:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.609 10:56:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.609 10:56:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:50.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:25:50.609 00:25:50.609 --- 10.0.0.2 ping statistics --- 00:25:50.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.609 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:50.609 10:56:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:25:50.609 00:25:50.609 --- 10.0.0.1 ping statistics --- 00:25:50.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.609 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:25:50.609 10:56:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.609 10:56:07 -- nvmf/common.sh@410 -- # return 0 00:25:50.609 10:56:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:50.609 10:56:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.609 10:56:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:50.609 10:56:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:50.609 10:56:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.609 10:56:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:50.609 10:56:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:50.609 10:56:07 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:50.609 10:56:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:50.609 10:56:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:50.609 10:56:07 -- common/autotest_common.sh@10 -- # set +x 00:25:50.609 10:56:07 -- nvmf/common.sh@469 -- # nvmfpid=3543920 00:25:50.609 10:56:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:50.609 10:56:07 -- nvmf/common.sh@470 -- # waitforlisten 3543920 00:25:50.609 10:56:07 -- common/autotest_common.sh@819 -- # '[' -z 3543920 ']' 00:25:50.609 10:56:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.609 10:56:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:50.609 10:56:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.609 10:56:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:50.609 10:56:07 -- common/autotest_common.sh@10 -- # set +x 00:25:50.609 [2024-07-10 10:56:07.421099] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:50.609 [2024-07-10 10:56:07.421173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.867 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.867 [2024-07-10 10:56:07.483589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.867 [2024-07-10 10:56:07.573543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:50.867 [2024-07-10 10:56:07.573686] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.867 [2024-07-10 10:56:07.573703] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.867 [2024-07-10 10:56:07.573716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.867 [2024-07-10 10:56:07.573785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.867 [2024-07-10 10:56:07.573820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.867 [2024-07-10 10:56:07.573844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.867 [2024-07-10 10:56:07.573847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.801 10:56:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:51.801 10:56:08 -- common/autotest_common.sh@852 -- # return 0 00:25:51.801 10:56:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:51.801 10:56:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:51.801 10:56:08 -- common/autotest_common.sh@10 -- # set +x 00:25:51.801 10:56:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.801 10:56:08 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:51.801 10:56:08 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:55.076 10:56:11 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:55.076 10:56:11 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:55.076 10:56:11 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:55.076 10:56:11 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:55.333 10:56:12 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:55.333 10:56:12 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:55.333 10:56:12 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:55.333 10:56:12 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:55.333 10:56:12 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:55.589 [2024-07-10 10:56:12.301104] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.589 10:56:12 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.845 10:56:12 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:55.845 10:56:12 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.102 10:56:12 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:25:56.102 10:56:12 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:56.358 10:56:13 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.614 [2024-07-10 10:56:13.268784] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.614 10:56:13 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:56.872 10:56:13 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:56.872 10:56:13 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:56.872 10:56:13 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:56.872 10:56:13 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:58.244 Initializing NVMe Controllers 00:25:58.244 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:58.244 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:58.244 Initialization complete. Launching workers. 00:25:58.244 ======================================================== 00:25:58.244 Latency(us) 00:25:58.244 Device Information : IOPS MiB/s Average min max 00:25:58.244 PCIE (0000:88:00.0) NSID 1 from core 0: 86557.49 338.12 369.14 47.03 6274.67 00:25:58.244 ======================================================== 00:25:58.244 Total : 86557.49 338.12 369.14 47.03 6274.67 00:25:58.244 00:25:58.244 10:56:14 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:58.244 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.176 Initializing NVMe Controllers 00:25:59.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:59.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:59.176 Initialization complete. Launching workers. 
00:25:59.176 ======================================================== 00:25:59.176 Latency(us) 00:25:59.176 Device Information : IOPS MiB/s Average min max 00:25:59.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 134.00 0.52 7630.09 186.45 46396.54 00:25:59.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.00 0.25 15726.00 7006.09 51847.46 00:25:59.176 ======================================================== 00:25:59.176 Total : 198.00 0.77 10246.95 186.45 51847.46 00:25:59.176 00:25:59.176 10:56:15 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.176 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.076 Initializing NVMe Controllers 00:26:01.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:01.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:01.076 Initialization complete. Launching workers. 00:26:01.076 ======================================================== 00:26:01.076 Latency(us) 00:26:01.076 Device Information : IOPS MiB/s Average min max 00:26:01.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6586.56 25.73 4858.55 1022.31 15075.53 00:26:01.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3889.97 15.20 8227.47 5764.19 16134.77 00:26:01.076 ======================================================== 00:26:01.076 Total : 10476.52 40.92 6109.44 1022.31 16134.77 00:26:01.076 00:26:01.076 10:56:17 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:01.076 10:56:17 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:01.076 10:56:17 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:01.076 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.607 Initializing NVMe Controllers 00:26:03.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.607 Controller IO queue size 128, less than required. 00:26:03.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.607 Controller IO queue size 128, less than required. 00:26:03.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:03.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:03.607 Initialization complete. Launching workers. 
00:26:03.607 ======================================================== 00:26:03.607 Latency(us) 00:26:03.607 Device Information : IOPS MiB/s Average min max 00:26:03.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 896.42 224.10 148022.79 88763.51 224239.89 00:26:03.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.18 142.54 232580.19 94584.69 375107.76 00:26:03.607 ======================================================== 00:26:03.607 Total : 1466.60 366.65 180896.63 88763.51 375107.76 00:26:03.607 00:26:03.607 10:56:19 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:03.607 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.607 No valid NVMe controllers or AIO or URING devices found 00:26:03.607 Initializing NVMe Controllers 00:26:03.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.607 Controller IO queue size 128, less than required. 00:26:03.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.607 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:03.607 Controller IO queue size 128, less than required. 00:26:03.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.607 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:03.607 WARNING: Some requested NVMe devices were skipped 00:26:03.607 10:56:20 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:03.607 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.139 Initializing NVMe Controllers 00:26:06.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:06.139 Controller IO queue size 128, less than required. 00:26:06.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:06.139 Controller IO queue size 128, less than required. 00:26:06.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:06.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:06.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:06.139 Initialization complete. Launching workers. 
00:26:06.139 00:26:06.139 ==================== 00:26:06.139 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:06.139 TCP transport: 00:26:06.139 polls: 23239 00:26:06.139 idle_polls: 9763 00:26:06.139 sock_completions: 13476 00:26:06.139 nvme_completions: 4227 00:26:06.139 submitted_requests: 6504 00:26:06.139 queued_requests: 1 00:26:06.139 00:26:06.139 ==================== 00:26:06.139 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:06.139 TCP transport: 00:26:06.139 polls: 20671 00:26:06.139 idle_polls: 7274 00:26:06.139 sock_completions: 13397 00:26:06.139 nvme_completions: 4491 00:26:06.139 submitted_requests: 6900 00:26:06.139 queued_requests: 1 00:26:06.139 ======================================================== 00:26:06.139 Latency(us) 00:26:06.139 Device Information : IOPS MiB/s Average min max 00:26:06.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1119.98 280.00 118028.61 57633.70 169543.89 00:26:06.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1185.98 296.50 110241.77 56687.81 151933.23 00:26:06.139 ======================================================== 00:26:06.139 Total : 2305.97 576.49 114023.75 56687.81 169543.89 00:26:06.139 00:26:06.139 10:56:22 -- host/perf.sh@66 -- # sync 00:26:06.139 10:56:22 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.139 10:56:22 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:06.139 10:56:22 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:26:06.139 10:56:22 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:09.415 10:56:25 -- host/perf.sh@72 -- # ls_guid=3c9232d6-b8de-494c-b8d9-6805cc87e8e3 00:26:09.415 10:56:25 -- host/perf.sh@73 -- # get_lvs_free_mb 3c9232d6-b8de-494c-b8d9-6805cc87e8e3 00:26:09.415 10:56:25 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3c9232d6-b8de-494c-b8d9-6805cc87e8e3 00:26:09.415 10:56:25 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:09.415 10:56:25 -- common/autotest_common.sh@1345 -- # local fc 00:26:09.415 10:56:25 -- common/autotest_common.sh@1346 -- # local cs 00:26:09.415 10:56:25 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:09.415 10:56:26 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:09.415 { 00:26:09.415 "uuid": "3c9232d6-b8de-494c-b8d9-6805cc87e8e3", 00:26:09.415 "name": "lvs_0", 00:26:09.415 "base_bdev": "Nvme0n1", 00:26:09.415 "total_data_clusters": 238234, 00:26:09.415 "free_clusters": 238234, 00:26:09.415 "block_size": 512, 00:26:09.415 "cluster_size": 4194304 00:26:09.415 } 00:26:09.415 ]' 00:26:09.415 10:56:26 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3c9232d6-b8de-494c-b8d9-6805cc87e8e3") .free_clusters' 00:26:09.672 10:56:26 -- common/autotest_common.sh@1348 -- # fc=238234 00:26:09.672 10:56:26 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3c9232d6-b8de-494c-b8d9-6805cc87e8e3") .cluster_size' 00:26:09.672 10:56:26 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:09.672 10:56:26 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:26:09.672 10:56:26 -- common/autotest_common.sh@1353 -- # echo 952936 00:26:09.672 952936 00:26:09.672 10:56:26 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:09.672 10:56:26 
-- host/perf.sh@78 -- # free_mb=20480 00:26:09.672 10:56:26 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c9232d6-b8de-494c-b8d9-6805cc87e8e3 lbd_0 20480 00:26:10.237 10:56:26 -- host/perf.sh@80 -- # lb_guid=590c77c6-9214-44e1-9f27-4242e91e7a7e 00:26:10.237 10:56:26 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 590c77c6-9214-44e1-9f27-4242e91e7a7e lvs_n_0 00:26:11.170 10:56:27 -- host/perf.sh@83 -- # ls_nested_guid=c757cd4a-bfc2-4666-b40c-b96b3115bf90 00:26:11.170 10:56:27 -- host/perf.sh@84 -- # get_lvs_free_mb c757cd4a-bfc2-4666-b40c-b96b3115bf90 00:26:11.170 10:56:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c757cd4a-bfc2-4666-b40c-b96b3115bf90 00:26:11.170 10:56:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:11.171 10:56:27 -- common/autotest_common.sh@1345 -- # local fc 00:26:11.171 10:56:27 -- common/autotest_common.sh@1346 -- # local cs 00:26:11.171 10:56:27 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:11.171 10:56:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:11.171 { 00:26:11.171 "uuid": "3c9232d6-b8de-494c-b8d9-6805cc87e8e3", 00:26:11.171 "name": "lvs_0", 00:26:11.171 "base_bdev": "Nvme0n1", 00:26:11.171 "total_data_clusters": 238234, 00:26:11.171 "free_clusters": 233114, 00:26:11.171 "block_size": 512, 00:26:11.171 "cluster_size": 4194304 00:26:11.171 }, 00:26:11.171 { 00:26:11.171 "uuid": "c757cd4a-bfc2-4666-b40c-b96b3115bf90", 00:26:11.171 "name": "lvs_n_0", 00:26:11.171 "base_bdev": "590c77c6-9214-44e1-9f27-4242e91e7a7e", 00:26:11.171 "total_data_clusters": 5114, 00:26:11.171 "free_clusters": 5114, 00:26:11.171 "block_size": 512, 00:26:11.171 "cluster_size": 4194304 00:26:11.171 } 00:26:11.171 ]' 00:26:11.171 10:56:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c757cd4a-bfc2-4666-b40c-b96b3115bf90") .free_clusters' 00:26:11.171 10:56:27 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:11.171 10:56:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c757cd4a-bfc2-4666-b40c-b96b3115bf90") .cluster_size' 00:26:11.171 10:56:27 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:11.171 10:56:27 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:26:11.171 10:56:27 -- common/autotest_common.sh@1353 -- # echo 20456 00:26:11.171 20456 00:26:11.171 10:56:27 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:11.171 10:56:27 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c757cd4a-bfc2-4666-b40c-b96b3115bf90 lbd_nest_0 20456 00:26:11.429 10:56:28 -- host/perf.sh@88 -- # lb_nested_guid=9096388b-8fa2-4c4c-b708-2fd1203091ae 00:26:11.429 10:56:28 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:11.686 10:56:28 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:11.686 10:56:28 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9096388b-8fa2-4c4c-b708-2fd1203091ae 00:26:11.944 10:56:28 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.201 10:56:28 -- host/perf.sh@95 -- # qd_depth=("1" "32" 
"128") 00:26:12.201 10:56:28 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:12.201 10:56:28 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:12.201 10:56:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:12.201 10:56:28 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.201 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.392 Initializing NVMe Controllers 00:26:24.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:24.392 Initialization complete. Launching workers. 00:26:24.392 ======================================================== 00:26:24.392 Latency(us) 00:26:24.392 Device Information : IOPS MiB/s Average min max 00:26:24.392 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.60 0.02 20636.23 208.31 47393.31 00:26:24.392 ======================================================== 00:26:24.392 Total : 48.60 0.02 20636.23 208.31 47393.31 00:26:24.392 00:26:24.392 10:56:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:24.392 10:56:39 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:24.392 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.353 Initializing NVMe Controllers 00:26:34.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.353 Initialization complete. Launching workers. 00:26:34.353 ======================================================== 00:26:34.353 Latency(us) 00:26:34.353 Device Information : IOPS MiB/s Average min max 00:26:34.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.40 9.05 13830.22 5982.29 55880.18 00:26:34.353 ======================================================== 00:26:34.353 Total : 72.40 9.05 13830.22 5982.29 55880.18 00:26:34.353 00:26:34.353 10:56:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:34.353 10:56:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:34.353 10:56:49 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:34.353 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.320 Initializing NVMe Controllers 00:26:44.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.320 Initialization complete. Launching workers. 
00:26:44.320 ======================================================== 00:26:44.320 Latency(us) 00:26:44.320 Device Information : IOPS MiB/s Average min max 00:26:44.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7564.05 3.69 4230.06 281.85 12011.73 00:26:44.320 ======================================================== 00:26:44.320 Total : 7564.05 3.69 4230.06 281.85 12011.73 00:26:44.320 00:26:44.320 10:56:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.320 10:56:59 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.320 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.409 Initializing NVMe Controllers 00:26:54.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:54.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:54.409 Initialization complete. Launching workers. 00:26:54.409 ======================================================== 00:26:54.409 Latency(us) 00:26:54.409 Device Information : IOPS MiB/s Average min max 00:26:54.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2180.48 272.56 14674.93 757.82 33567.61 00:26:54.410 ======================================================== 00:26:54.410 Total : 2180.48 272.56 14674.93 757.82 33567.61 00:26:54.410 00:26:54.410 10:57:10 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:54.410 10:57:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:54.410 10:57:10 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.410 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.384 Initializing NVMe Controllers 00:27:04.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.384 Controller IO queue size 128, less than required. 00:27:04.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:04.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:04.384 Initialization complete. Launching workers. 00:27:04.384 ======================================================== 00:27:04.384 Latency(us) 00:27:04.384 Device Information : IOPS MiB/s Average min max 00:27:04.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11855.30 5.79 10799.06 1918.95 54030.98 00:27:04.384 ======================================================== 00:27:04.384 Total : 11855.30 5.79 10799.06 1918.95 54030.98 00:27:04.384 00:27:04.384 10:57:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:04.384 10:57:20 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.345 Initializing NVMe Controllers 00:27:14.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:14.345 Controller IO queue size 128, less than required. 00:27:14.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:14.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:14.345 Initialization complete. Launching workers. 00:27:14.345 ======================================================== 00:27:14.345 Latency(us) 00:27:14.345 Device Information : IOPS MiB/s Average min max 00:27:14.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1225.67 153.21 104624.68 31147.61 191490.67 00:27:14.345 ======================================================== 00:27:14.345 Total : 1225.67 153.21 104624.68 31147.61 191490.67 00:27:14.345 00:27:14.345 10:57:31 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.603 10:57:31 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9096388b-8fa2-4c4c-b708-2fd1203091ae 00:27:15.537 10:57:32 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:15.537 10:57:32 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 590c77c6-9214-44e1-9f27-4242e91e7a7e 00:27:15.795 10:57:32 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:16.053 10:57:32 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:16.053 10:57:32 -- host/perf.sh@114 -- # nvmftestfini 00:27:16.053 10:57:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:16.053 10:57:32 -- nvmf/common.sh@116 -- # sync 00:27:16.053 10:57:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:16.053 10:57:32 -- nvmf/common.sh@119 -- # set +e 00:27:16.053 10:57:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:16.053 10:57:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:16.053 rmmod nvme_tcp 00:27:16.311 rmmod nvme_fabrics 00:27:16.311 rmmod nvme_keyring 00:27:16.311 10:57:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:16.311 10:57:32 -- nvmf/common.sh@123 -- # set -e 00:27:16.311 10:57:32 -- nvmf/common.sh@124 -- # return 0 00:27:16.311 10:57:32 -- nvmf/common.sh@477 -- # '[' -n 3543920 ']' 00:27:16.311 10:57:32 -- nvmf/common.sh@478 -- # killprocess 3543920 00:27:16.311 10:57:32 -- common/autotest_common.sh@926 -- # '[' -z 3543920 ']' 00:27:16.311 10:57:32 -- common/autotest_common.sh@930 -- # kill -0 3543920 00:27:16.311 10:57:32 -- common/autotest_common.sh@931 -- # uname 00:27:16.311 10:57:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:16.311 10:57:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3543920 00:27:16.311 10:57:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:16.311 10:57:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:16.311 10:57:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3543920' 00:27:16.311 killing process with pid 3543920 00:27:16.311 10:57:32 -- common/autotest_common.sh@945 -- # kill 3543920 00:27:16.311 10:57:32 -- common/autotest_common.sh@950 -- # wait 3543920 00:27:18.209 10:57:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:18.209 10:57:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:18.209 10:57:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:18.209 10:57:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.209 10:57:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:18.209 10:57:34 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.209 10:57:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.209 10:57:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.112 10:57:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:20.113 00:27:20.113 real 1m31.376s 00:27:20.113 user 5m34.072s 00:27:20.113 sys 0m17.013s 00:27:20.113 10:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.113 10:57:36 -- common/autotest_common.sh@10 -- # set +x 00:27:20.113 ************************************ 00:27:20.113 END TEST nvmf_perf 00:27:20.113 ************************************ 00:27:20.113 10:57:36 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:20.113 10:57:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:20.113 10:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:20.113 10:57:36 -- common/autotest_common.sh@10 -- # set +x 00:27:20.113 ************************************ 00:27:20.113 START TEST nvmf_fio_host 00:27:20.113 ************************************ 00:27:20.113 10:57:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:20.113 * Looking for test storage... 00:27:20.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.113 10:57:36 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.113 10:57:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.113 10:57:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.113 10:57:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.113 10:57:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- paths/export.sh@5 -- # export PATH 00:27:20.113 10:57:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.113 10:57:36 -- nvmf/common.sh@7 -- # uname -s 00:27:20.113 10:57:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.113 10:57:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.113 10:57:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.113 10:57:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.113 10:57:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.113 10:57:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.113 10:57:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.113 10:57:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.113 10:57:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.113 10:57:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.113 10:57:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.113 10:57:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.113 10:57:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.113 10:57:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.113 10:57:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.113 10:57:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.113 10:57:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.113 10:57:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.113 10:57:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.113 10:57:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- paths/export.sh@5 -- # export PATH 00:27:20.113 10:57:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.113 10:57:36 -- nvmf/common.sh@46 -- # : 0 00:27:20.113 10:57:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:20.113 10:57:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:20.113 10:57:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:20.113 10:57:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.113 10:57:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.113 10:57:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:20.113 10:57:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:20.113 10:57:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:20.113 10:57:36 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:20.113 10:57:36 -- host/fio.sh@14 -- # nvmftestinit 00:27:20.113 10:57:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:20.113 10:57:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.113 10:57:36 -- nvmf/common.sh@436 
-- # prepare_net_devs 00:27:20.113 10:57:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:20.113 10:57:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:20.113 10:57:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.114 10:57:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.114 10:57:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.114 10:57:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:20.114 10:57:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:20.114 10:57:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:20.114 10:57:36 -- common/autotest_common.sh@10 -- # set +x 00:27:22.015 10:57:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:22.015 10:57:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:22.015 10:57:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:22.015 10:57:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:22.015 10:57:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:22.015 10:57:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:22.015 10:57:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:22.015 10:57:38 -- nvmf/common.sh@294 -- # net_devs=() 00:27:22.015 10:57:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:22.015 10:57:38 -- nvmf/common.sh@295 -- # e810=() 00:27:22.015 10:57:38 -- nvmf/common.sh@295 -- # local -ga e810 00:27:22.015 10:57:38 -- nvmf/common.sh@296 -- # x722=() 00:27:22.015 10:57:38 -- nvmf/common.sh@296 -- # local -ga x722 00:27:22.015 10:57:38 -- nvmf/common.sh@297 -- # mlx=() 00:27:22.015 10:57:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:22.015 10:57:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.015 10:57:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:22.015 10:57:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:22.015 10:57:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:22.015 10:57:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:22.015 10:57:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.015 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.015 10:57:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.015 10:57:38 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:22.015 10:57:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:22.015 10:57:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.016 10:57:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:22.016 10:57:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:22.016 10:57:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.016 10:57:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:22.016 10:57:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.016 10:57:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.016 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.016 10:57:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.016 10:57:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:22.016 10:57:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.016 10:57:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:22.016 10:57:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.016 10:57:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.016 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.016 10:57:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.016 10:57:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:22.016 10:57:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:22.016 10:57:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:22.016 10:57:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.016 10:57:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.016 10:57:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.016 10:57:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:22.016 10:57:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.016 10:57:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.016 10:57:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:22.016 10:57:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.016 10:57:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.016 10:57:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:22.016 10:57:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:22.016 10:57:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.016 10:57:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.016 10:57:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.016 10:57:38 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.016 10:57:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:22.016 10:57:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.016 10:57:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.016 10:57:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.016 10:57:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:22.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:27:22.016 00:27:22.016 --- 10.0.0.2 ping statistics --- 00:27:22.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.016 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:22.016 10:57:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:27:22.016 00:27:22.016 --- 10.0.0.1 ping statistics --- 00:27:22.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.016 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:27:22.016 10:57:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.016 10:57:38 -- nvmf/common.sh@410 -- # return 0 00:27:22.016 10:57:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:22.016 10:57:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.016 10:57:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:22.016 10:57:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.016 10:57:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:22.016 10:57:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:22.016 10:57:38 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:22.016 10:57:38 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:22.016 10:57:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:22.016 10:57:38 -- common/autotest_common.sh@10 -- # set +x 00:27:22.016 10:57:38 -- host/fio.sh@24 -- # nvmfpid=3556960 00:27:22.016 10:57:38 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:22.016 10:57:38 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:22.016 10:57:38 -- host/fio.sh@28 -- # waitforlisten 3556960 00:27:22.016 10:57:38 -- common/autotest_common.sh@819 -- # '[' -z 3556960 ']' 00:27:22.016 10:57:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.016 10:57:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:22.016 10:57:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.016 10:57:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:22.016 10:57:38 -- common/autotest_common.sh@10 -- # set +x 00:27:22.275 [2024-07-10 10:57:38.872386] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:27:22.275 [2024-07-10 10:57:38.872467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.275 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.275 [2024-07-10 10:57:38.940910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.275 [2024-07-10 10:57:39.030523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:22.275 [2024-07-10 10:57:39.030683] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.275 [2024-07-10 10:57:39.030703] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.275 [2024-07-10 10:57:39.030717] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.275 [2024-07-10 10:57:39.030796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.275 [2024-07-10 10:57:39.030853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.275 [2024-07-10 10:57:39.030973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.275 [2024-07-10 10:57:39.030975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.209 10:57:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:23.209 10:57:39 -- common/autotest_common.sh@852 -- # return 0 00:27:23.209 10:57:39 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:23.209 [2024-07-10 10:57:40.032695] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.467 10:57:40 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:23.467 10:57:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:23.467 10:57:40 -- common/autotest_common.sh@10 -- # set +x 00:27:23.467 10:57:40 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:23.725 Malloc1 00:27:23.725 10:57:40 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:23.982 10:57:40 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:24.240 10:57:40 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.240 [2024-07-10 10:57:41.032232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.240 10:57:41 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:24.498 10:57:41 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:24.498 10:57:41 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.498 10:57:41 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.498 10:57:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:24.498 10:57:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.498 10:57:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:24.498 10:57:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.498 10:57:41 -- common/autotest_common.sh@1320 -- # shift 00:27:24.498 10:57:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:24.498 10:57:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.498 10:57:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.498 10:57:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.498 10:57:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.498 10:57:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.498 10:57:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:24.498 10:57:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:24.756 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:24.756 fio-3.35 00:27:24.756 Starting 1 thread 00:27:24.756 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.283 00:27:27.283 test: (groupid=0, jobs=1): err= 0: pid=3557452: Wed Jul 10 10:57:43 2024 00:27:27.283 read: IOPS=9416, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2006msec) 00:27:27.283 slat (nsec): min=1962, max=176905, avg=2580.42, stdev=1992.98 00:27:27.283 clat (usec): min=2097, max=12143, avg=7524.42, stdev=554.45 00:27:27.283 lat (usec): min=2124, max=12145, avg=7527.00, stdev=554.32 00:27:27.283 clat percentiles (usec): 00:27:27.283 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:27:27.283 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:27:27.283 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8356], 00:27:27.283 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10683], 99.95th=[11600], 00:27:27.283 | 99.99th=[12125] 00:27:27.283 bw ( KiB/s): min=36624, max=38408, per=99.92%, avg=37634.00, stdev=756.03, samples=4 00:27:27.283 iops : min= 9156, max= 9602, avg=9408.50, stdev=189.01, samples=4 00:27:27.283 write: IOPS=9415, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2006msec); 0 zone resets 00:27:27.283 slat (usec): min=2, max=152, avg= 2.68, stdev= 1.53 00:27:27.283 clat (usec): 
min=1637, max=11433, avg=6030.54, stdev=494.06 00:27:27.283 lat (usec): min=1647, max=11435, avg=6033.22, stdev=494.01 00:27:27.283 clat percentiles (usec): 00:27:27.283 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:27:27.283 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:27:27.283 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:27:27.284 | 99.00th=[ 7111], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10421], 00:27:27.284 | 99.99th=[11338] 00:27:27.284 bw ( KiB/s): min=37440, max=37888, per=100.00%, avg=37666.00, stdev=200.31, samples=4 00:27:27.284 iops : min= 9360, max= 9472, avg=9416.50, stdev=50.08, samples=4 00:27:27.284 lat (msec) : 2=0.02%, 4=0.10%, 10=99.76%, 20=0.12% 00:27:27.284 cpu : usr=57.16%, sys=37.01%, ctx=56, majf=0, minf=32 00:27:27.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:27.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:27.284 issued rwts: total=18889,18888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:27.284 00:27:27.284 Run status group 0 (all jobs): 00:27:27.284 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2006-2006msec 00:27:27.284 WRITE: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2006-2006msec 00:27:27.284 10:57:43 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:27.284 10:57:43 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:27.284 10:57:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:27.284 10:57:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:27.284 10:57:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:27.284 10:57:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:27.284 10:57:43 -- common/autotest_common.sh@1320 -- # shift 00:27:27.284 10:57:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:27.284 10:57:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:27.284 10:57:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:27.284 10:57:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:27.284 10:57:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:27.284 10:57:43 -- 
common/autotest_common.sh@1324 -- # asan_lib= 00:27:27.284 10:57:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:27.284 10:57:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:27.284 10:57:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:27.284 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:27.284 fio-3.35 00:27:27.284 Starting 1 thread 00:27:27.284 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.813 00:27:29.813 test: (groupid=0, jobs=1): err= 0: pid=3557801: Wed Jul 10 10:57:46 2024 00:27:29.813 read: IOPS=8241, BW=129MiB/s (135MB/s)(258MiB/2005msec) 00:27:29.813 slat (usec): min=2, max=128, avg= 3.62, stdev= 1.80 00:27:29.813 clat (usec): min=3221, max=17899, avg=9237.27, stdev=2345.84 00:27:29.813 lat (usec): min=3224, max=17903, avg=9240.89, stdev=2345.93 00:27:29.813 clat percentiles (usec): 00:27:29.813 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7177], 00:27:29.813 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:27:29.813 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12256], 95.00th=[13698], 00:27:29.813 | 99.00th=[15533], 99.50th=[16319], 99.90th=[17171], 99.95th=[17433], 00:27:29.813 | 99.99th=[17957] 00:27:29.813 bw ( KiB/s): min=59360, max=81312, per=52.07%, avg=68664.00, stdev=9273.43, samples=4 00:27:29.813 iops : min= 3710, max= 5082, avg=4291.50, stdev=579.59, samples=4 00:27:29.813 write: IOPS=4990, BW=78.0MiB/s (81.8MB/s)(141MiB/1803msec); 0 zone resets 00:27:29.813 slat (usec): min=30, max=124, avg=33.34, stdev= 4.71 00:27:29.813 clat (usec): min=3571, max=19740, avg=10921.18, stdev=2111.56 00:27:29.813 lat (usec): min=3603, max=19773, avg=10954.51, stdev=2111.60 00:27:29.813 clat percentiles (usec): 00:27:29.813 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 9110], 00:27:29.813 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:27:29.813 | 70.00th=[11731], 80.00th=[12518], 90.00th=[13960], 95.00th=[14877], 00:27:29.813 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:27:29.813 | 99.99th=[19792] 00:27:29.813 bw ( KiB/s): min=61728, max=84256, per=89.60%, avg=71544.00, stdev=9413.49, samples=4 00:27:29.813 iops : min= 3858, max= 5266, avg=4471.50, stdev=588.34, samples=4 00:27:29.813 lat (msec) : 4=0.15%, 10=54.94%, 20=44.91% 00:27:29.813 cpu : usr=75.20%, sys=21.21%, ctx=34, majf=0, minf=60 00:27:29.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:29.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:29.813 issued rwts: total=16524,8998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.813 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:29.813 00:27:29.813 Run status group 0 (all jobs): 00:27:29.813 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2005-2005msec 00:27:29.813 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=141MiB (147MB), run=1803-1803msec 00:27:29.813 10:57:46 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:27:29.813 10:57:46 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:29.813 10:57:46 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:29.813 10:57:46 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:29.813 10:57:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:29.813 10:57:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:29.813 10:57:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:29.813 10:57:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:29.813 10:57:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:30.071 10:57:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:30.071 10:57:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:30.071 10:57:46 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:33.348 Nvme0n1 00:27:33.348 10:57:49 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:35.871 10:57:52 -- host/fio.sh@53 -- # ls_guid=ae570bb3-967a-42a8-a6e4-c345f370bcd5 00:27:35.871 10:57:52 -- host/fio.sh@54 -- # get_lvs_free_mb ae570bb3-967a-42a8-a6e4-c345f370bcd5 00:27:35.871 10:57:52 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ae570bb3-967a-42a8-a6e4-c345f370bcd5 00:27:35.871 10:57:52 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:35.871 10:57:52 -- common/autotest_common.sh@1345 -- # local fc 00:27:35.871 10:57:52 -- common/autotest_common.sh@1346 -- # local cs 00:27:35.872 10:57:52 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:36.130 10:57:52 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:36.130 { 00:27:36.130 "uuid": "ae570bb3-967a-42a8-a6e4-c345f370bcd5", 00:27:36.130 "name": "lvs_0", 00:27:36.130 "base_bdev": "Nvme0n1", 00:27:36.130 "total_data_clusters": 930, 00:27:36.130 "free_clusters": 930, 00:27:36.130 "block_size": 512, 00:27:36.130 "cluster_size": 1073741824 00:27:36.130 } 00:27:36.130 ]' 00:27:36.130 10:57:52 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ae570bb3-967a-42a8-a6e4-c345f370bcd5") .free_clusters' 00:27:36.130 10:57:52 -- common/autotest_common.sh@1348 -- # fc=930 00:27:36.130 10:57:52 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ae570bb3-967a-42a8-a6e4-c345f370bcd5") .cluster_size' 00:27:36.130 10:57:52 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:36.130 10:57:52 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:36.130 10:57:52 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:36.130 952320 00:27:36.130 10:57:52 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:36.711 efb614ff-b210-4a69-96af-63b8f19de6dd 00:27:36.711 10:57:53 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:36.966 10:57:53 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:36.966 10:57:53 -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:37.223 10:57:54 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:37.223 10:57:54 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:37.223 10:57:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:37.223 10:57:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.223 10:57:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:37.223 10:57:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.223 10:57:54 -- common/autotest_common.sh@1320 -- # shift 00:27:37.223 10:57:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:37.223 10:57:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:37.223 10:57:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:37.223 10:57:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:37.223 10:57:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:37.223 10:57:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:37.223 10:57:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:37.223 10:57:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:37.480 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:37.480 fio-3.35 00:27:37.480 Starting 1 thread 00:27:37.480 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.999 00:27:39.999 test: (groupid=0, jobs=1): err= 0: pid=3559121: Wed Jul 10 10:57:56 2024 00:27:39.999 read: IOPS=6415, BW=25.1MiB/s (26.3MB/s)(50.3MiB/2007msec) 00:27:39.999 slat (nsec): min=1991, max=135965, avg=2481.27, stdev=1702.21 00:27:39.999 clat (usec): min=919, max=170514, avg=11022.07, stdev=11269.66 00:27:39.999 lat (usec): min=921, max=170546, avg=11024.55, stdev=11269.88 00:27:39.999 clat percentiles (msec): 00:27:39.999 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:27:39.999 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:39.999 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 
95.00th=[ 12], 00:27:39.999 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:39.999 | 99.99th=[ 171] 00:27:39.999 bw ( KiB/s): min=18328, max=28224, per=99.78%, avg=25606.00, stdev=4855.66, samples=4 00:27:39.999 iops : min= 4582, max= 7056, avg=6401.50, stdev=1213.91, samples=4 00:27:39.999 write: IOPS=6414, BW=25.1MiB/s (26.3MB/s)(50.3MiB/2007msec); 0 zone resets 00:27:39.999 slat (usec): min=2, max=105, avg= 2.62, stdev= 1.39 00:27:39.999 clat (usec): min=338, max=168803, avg=8828.57, stdev=10586.59 00:27:39.999 lat (usec): min=341, max=168847, avg=8831.19, stdev=10586.81 00:27:39.999 clat percentiles (msec): 00:27:39.999 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:27:39.999 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:27:39.999 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:27:39.999 | 99.00th=[ 10], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:27:39.999 | 99.99th=[ 169] 00:27:39.999 bw ( KiB/s): min=19304, max=27888, per=99.93%, avg=25640.00, stdev=4225.16, samples=4 00:27:39.999 iops : min= 4826, max= 6972, avg=6410.00, stdev=1056.29, samples=4 00:27:39.999 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:39.999 lat (msec) : 2=0.03%, 4=0.20%, 10=68.90%, 20=30.36%, 250=0.50% 00:27:39.999 cpu : usr=59.42%, sys=36.94%, ctx=84, majf=0, minf=32 00:27:39.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:39.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:39.999 issued rwts: total=12876,12874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:39.999 00:27:39.999 Run status group 0 (all jobs): 00:27:39.999 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.7MB), run=2007-2007msec 00:27:39.999 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.7MB), run=2007-2007msec 00:27:39.999 10:57:56 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:40.257 10:57:56 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:41.186 10:57:57 -- host/fio.sh@64 -- # ls_nested_guid=65ceef65-2757-4d6a-ac0d-634dc9fdd9f7 00:27:41.186 10:57:57 -- host/fio.sh@65 -- # get_lvs_free_mb 65ceef65-2757-4d6a-ac0d-634dc9fdd9f7 00:27:41.186 10:57:57 -- common/autotest_common.sh@1343 -- # local lvs_uuid=65ceef65-2757-4d6a-ac0d-634dc9fdd9f7 00:27:41.186 10:57:57 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:41.186 10:57:57 -- common/autotest_common.sh@1345 -- # local fc 00:27:41.186 10:57:57 -- common/autotest_common.sh@1346 -- # local cs 00:27:41.186 10:57:57 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:41.441 10:57:58 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:41.442 { 00:27:41.442 "uuid": "ae570bb3-967a-42a8-a6e4-c345f370bcd5", 00:27:41.442 "name": "lvs_0", 00:27:41.442 "base_bdev": "Nvme0n1", 00:27:41.442 "total_data_clusters": 930, 00:27:41.442 "free_clusters": 0, 00:27:41.442 "block_size": 512, 00:27:41.442 "cluster_size": 1073741824 00:27:41.442 }, 00:27:41.442 { 00:27:41.442 "uuid": "65ceef65-2757-4d6a-ac0d-634dc9fdd9f7", 00:27:41.442 "name": "lvs_n_0", 
00:27:41.442 "base_bdev": "efb614ff-b210-4a69-96af-63b8f19de6dd", 00:27:41.442 "total_data_clusters": 237847, 00:27:41.442 "free_clusters": 237847, 00:27:41.442 "block_size": 512, 00:27:41.442 "cluster_size": 4194304 00:27:41.442 } 00:27:41.442 ]' 00:27:41.442 10:57:58 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="65ceef65-2757-4d6a-ac0d-634dc9fdd9f7") .free_clusters' 00:27:41.442 10:57:58 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:41.442 10:57:58 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="65ceef65-2757-4d6a-ac0d-634dc9fdd9f7") .cluster_size' 00:27:41.442 10:57:58 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:41.442 10:57:58 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:41.442 10:57:58 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:41.442 951388 00:27:41.442 10:57:58 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:42.372 799b1991-a2eb-469b-abb9-3d1799159606 00:27:42.372 10:57:58 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:42.372 10:57:59 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:42.629 10:57:59 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:42.886 10:57:59 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:42.886 10:57:59 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:42.886 10:57:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:42.886 10:57:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.886 10:57:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:42.886 10:57:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:42.886 10:57:59 -- common/autotest_common.sh@1320 -- # shift 00:27:42.886 10:57:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:42.886 10:57:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:42.886 10:57:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:42.886 10:57:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:42.886 10:57:59 -- 
common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:42.886 10:57:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:42.886 10:57:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:42.886 10:57:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:42.886 10:57:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:43.143 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:43.143 fio-3.35 00:27:43.143 Starting 1 thread 00:27:43.143 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.721 00:27:45.721 test: (groupid=0, jobs=1): err= 0: pid=3559866: Wed Jul 10 10:58:02 2024 00:27:45.721 read: IOPS=6065, BW=23.7MiB/s (24.8MB/s)(47.6MiB/2009msec) 00:27:45.721 slat (nsec): min=1973, max=137689, avg=2473.24, stdev=1801.79 00:27:45.721 clat (usec): min=4445, max=20286, avg=11705.15, stdev=983.01 00:27:45.721 lat (usec): min=4450, max=20288, avg=11707.62, stdev=982.93 00:27:45.721 clat percentiles (usec): 00:27:45.721 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:27:45.721 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:27:45.721 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:27:45.721 | 99.00th=[13960], 99.50th=[14091], 99.90th=[18482], 99.95th=[18744], 00:27:45.721 | 99.99th=[19792] 00:27:45.721 bw ( KiB/s): min=23048, max=24856, per=99.85%, avg=24224.00, stdev=802.82, samples=4 00:27:45.721 iops : min= 5762, max= 6214, avg=6056.00, stdev=200.71, samples=4 00:27:45.721 write: IOPS=6045, BW=23.6MiB/s (24.8MB/s)(47.4MiB/2009msec); 0 zone resets 00:27:45.721 slat (usec): min=2, max=108, avg= 2.61, stdev= 1.39 00:27:45.721 clat (usec): min=2105, max=18501, avg=9290.06, stdev=867.45 00:27:45.721 lat (usec): min=2111, max=18504, avg=9292.67, stdev=867.41 00:27:45.721 clat percentiles (usec): 00:27:45.721 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8291], 20.00th=[ 8586], 00:27:45.721 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:27:45.721 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:27:45.721 | 99.00th=[11207], 99.50th=[11469], 99.90th=[14091], 99.95th=[15926], 00:27:45.721 | 99.99th=[17433] 00:27:45.721 bw ( KiB/s): min=24016, max=24256, per=100.00%, avg=24180.00, stdev=113.42, samples=4 00:27:45.721 iops : min= 6004, max= 6064, avg=6045.00, stdev=28.35, samples=4 00:27:45.721 lat (msec) : 4=0.04%, 10=42.43%, 20=57.52%, 50=0.01% 00:27:45.721 cpu : usr=56.37%, sys=39.74%, ctx=109, majf=0, minf=32 00:27:45.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:45.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:45.722 issued rwts: total=12185,12145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:45.722 00:27:45.722 Run status group 0 (all jobs): 00:27:45.722 READ: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.6MiB (49.9MB), run=2009-2009msec 00:27:45.722 WRITE: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.4MiB (49.7MB), run=2009-2009msec 00:27:45.722 10:58:02 -- host/fio.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:45.722 10:58:02 -- host/fio.sh@74 -- # sync 00:27:45.722 10:58:02 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:49.899 10:58:06 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:49.899 10:58:06 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:53.175 10:58:09 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:53.175 10:58:09 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:55.073 10:58:11 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:55.073 10:58:11 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:55.073 10:58:11 -- host/fio.sh@86 -- # nvmftestfini 00:27:55.073 10:58:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:55.073 10:58:11 -- nvmf/common.sh@116 -- # sync 00:27:55.073 10:58:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:55.073 10:58:11 -- nvmf/common.sh@119 -- # set +e 00:27:55.073 10:58:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:55.073 10:58:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:55.073 rmmod nvme_tcp 00:27:55.073 rmmod nvme_fabrics 00:27:55.073 rmmod nvme_keyring 00:27:55.073 10:58:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:55.073 10:58:11 -- nvmf/common.sh@123 -- # set -e 00:27:55.073 10:58:11 -- nvmf/common.sh@124 -- # return 0 00:27:55.073 10:58:11 -- nvmf/common.sh@477 -- # '[' -n 3556960 ']' 00:27:55.073 10:58:11 -- nvmf/common.sh@478 -- # killprocess 3556960 00:27:55.073 10:58:11 -- common/autotest_common.sh@926 -- # '[' -z 3556960 ']' 00:27:55.073 10:58:11 -- common/autotest_common.sh@930 -- # kill -0 3556960 00:27:55.073 10:58:11 -- common/autotest_common.sh@931 -- # uname 00:27:55.073 10:58:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:55.073 10:58:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3556960 00:27:55.073 10:58:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:55.073 10:58:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:55.073 10:58:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3556960' 00:27:55.073 killing process with pid 3556960 00:27:55.073 10:58:11 -- common/autotest_common.sh@945 -- # kill 3556960 00:27:55.073 10:58:11 -- common/autotest_common.sh@950 -- # wait 3556960 00:27:55.073 10:58:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:55.073 10:58:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:55.073 10:58:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:55.073 10:58:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.073 10:58:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:55.073 10:58:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.073 10:58:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.073 10:58:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.607 10:58:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:57.607 00:27:57.607 real 0m37.218s 00:27:57.607 user 2m21.319s 00:27:57.607 sys 0m7.354s 00:27:57.607 
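For reference, the target-side bring-up that this fio host test exercised boils down to a handful of RPCs plus one fio invocation through the SPDK NVMe ioengine. The sketch below only restates commands already visible in this log; the long workspace paths are abbreviated to rpc.py and <spdk>, and 10.0.0.2:4420 with nqn.2016-06.io.spdk:cnode1 are the values used in this particular run, not general defaults.

    # create the TCP transport and a RAM-backed namespace (options as used in this run)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # drive I/O from the initiator side through the fio plugin built at <spdk>/build/fio/spdk_nvme
    LD_PRELOAD=<spdk>/build/fio/spdk_nvme /usr/src/fio/fio <spdk>/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The later passes of this test repeat the same pattern, only with lvol bdevs carved out of the attached NVMe device (bdev_lvol_create_lvstore / bdev_lvol_create) exported through cnode2 and cnode3 in place of Malloc1.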
10:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.607 10:58:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.607 ************************************ 00:27:57.607 END TEST nvmf_fio_host 00:27:57.607 ************************************ 00:27:57.607 10:58:13 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:57.607 10:58:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:57.607 10:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.607 10:58:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.607 ************************************ 00:27:57.607 START TEST nvmf_failover 00:27:57.607 ************************************ 00:27:57.607 10:58:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:57.607 * Looking for test storage... 00:27:57.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.607 10:58:13 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.607 10:58:13 -- nvmf/common.sh@7 -- # uname -s 00:27:57.607 10:58:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.607 10:58:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.607 10:58:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.607 10:58:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.607 10:58:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.607 10:58:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.607 10:58:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.607 10:58:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.607 10:58:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.607 10:58:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.607 10:58:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.607 10:58:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:57.607 10:58:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.607 10:58:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.607 10:58:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.607 10:58:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.607 10:58:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.607 10:58:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.607 10:58:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.607 10:58:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.607 10:58:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.607 10:58:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.607 10:58:13 -- paths/export.sh@5 -- # export PATH 00:27:57.608 10:58:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.608 10:58:13 -- nvmf/common.sh@46 -- # : 0 00:27:57.608 10:58:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:57.608 10:58:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:57.608 10:58:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:57.608 10:58:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.608 10:58:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.608 10:58:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:57.608 10:58:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:57.608 10:58:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:57.608 10:58:13 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.608 10:58:13 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.608 10:58:13 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:57.608 10:58:13 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:57.608 10:58:13 -- host/failover.sh@18 -- # nvmftestinit 00:27:57.608 10:58:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:57.608 10:58:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.608 10:58:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:57.608 10:58:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:57.608 10:58:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:57.608 10:58:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.608 10:58:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.608 10:58:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.608 10:58:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:57.608 10:58:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
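One detail worth keeping in mind while reading the failover run below: the script drives two separate SPDK processes over two separate RPC sockets, the nvmf target on the default /var/tmp/spdk.sock and bdevperf on the $bdevperf_rpc_sock set above, so the same rpc.py shows up in two forms throughout the rest of this log (paths abbreviated; both commands appear verbatim further down):

    # target-side RPCs go to the default socket
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # bdevperf-side RPCs name their socket explicitly with -s
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1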
00:27:57.608 10:58:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:57.608 10:58:13 -- common/autotest_common.sh@10 -- # set +x 00:27:59.526 10:58:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:59.526 10:58:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:59.526 10:58:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:59.526 10:58:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:59.526 10:58:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:59.526 10:58:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:59.526 10:58:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:59.526 10:58:15 -- nvmf/common.sh@294 -- # net_devs=() 00:27:59.526 10:58:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:59.526 10:58:15 -- nvmf/common.sh@295 -- # e810=() 00:27:59.526 10:58:15 -- nvmf/common.sh@295 -- # local -ga e810 00:27:59.526 10:58:15 -- nvmf/common.sh@296 -- # x722=() 00:27:59.526 10:58:15 -- nvmf/common.sh@296 -- # local -ga x722 00:27:59.526 10:58:15 -- nvmf/common.sh@297 -- # mlx=() 00:27:59.526 10:58:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:59.526 10:58:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.526 10:58:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:59.526 10:58:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:59.526 10:58:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:59.526 10:58:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:59.526 10:58:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:59.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:59.526 10:58:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:59.526 10:58:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:59.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:59.526 10:58:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:59.526 10:58:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:59.526 10:58:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:59.526 10:58:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.526 10:58:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:59.526 10:58:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.526 10:58:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:59.526 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:59.526 10:58:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.526 10:58:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:59.526 10:58:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.526 10:58:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:59.526 10:58:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.526 10:58:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:59.526 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:59.526 10:58:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.526 10:58:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:59.526 10:58:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:59.526 10:58:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:59.526 10:58:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:59.526 10:58:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.526 10:58:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.526 10:58:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.526 10:58:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:59.526 10:58:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.526 10:58:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.526 10:58:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:59.526 10:58:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.526 10:58:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.526 10:58:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:59.526 10:58:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:59.526 10:58:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.526 10:58:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.526 10:58:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.526 10:58:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.527 10:58:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:59.527 10:58:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.527 10:58:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.527 10:58:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.527 10:58:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:59.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:59.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:27:59.527 00:27:59.527 --- 10.0.0.2 ping statistics --- 00:27:59.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.527 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:27:59.527 10:58:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:59.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:27:59.527 00:27:59.527 --- 10.0.0.1 ping statistics --- 00:27:59.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.527 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:59.527 10:58:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.527 10:58:16 -- nvmf/common.sh@410 -- # return 0 00:27:59.527 10:58:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:59.527 10:58:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.527 10:58:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:59.527 10:58:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:59.527 10:58:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.527 10:58:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:59.527 10:58:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:59.527 10:58:16 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:59.527 10:58:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:59.527 10:58:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:59.527 10:58:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.527 10:58:16 -- nvmf/common.sh@469 -- # nvmfpid=3563171 00:27:59.527 10:58:16 -- nvmf/common.sh@470 -- # waitforlisten 3563171 00:27:59.527 10:58:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:59.527 10:58:16 -- common/autotest_common.sh@819 -- # '[' -z 3563171 ']' 00:27:59.527 10:58:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.527 10:58:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:59.527 10:58:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.527 10:58:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:59.527 10:58:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.527 [2024-07-10 10:58:16.116389] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:59.527 [2024-07-10 10:58:16.116503] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.527 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.527 [2024-07-10 10:58:16.189920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.527 [2024-07-10 10:58:16.278180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:59.527 [2024-07-10 10:58:16.278349] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.527 [2024-07-10 10:58:16.278369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:59.527 [2024-07-10 10:58:16.278384] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.527 [2024-07-10 10:58:16.278471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.527 [2024-07-10 10:58:16.278591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.527 [2024-07-10 10:58:16.278594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.459 10:58:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:00.459 10:58:17 -- common/autotest_common.sh@852 -- # return 0 00:28:00.459 10:58:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:00.459 10:58:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:00.459 10:58:17 -- common/autotest_common.sh@10 -- # set +x 00:28:00.459 10:58:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.459 10:58:17 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:00.459 [2024-07-10 10:58:17.271484] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.717 10:58:17 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:00.717 Malloc0 00:28:00.975 10:58:17 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.233 10:58:17 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:01.233 10:58:18 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.491 [2024-07-10 10:58:18.247561] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.491 10:58:18 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.749 [2024-07-10 10:58:18.492245] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:01.749 10:58:18 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:02.007 [2024-07-10 10:58:18.717039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:02.007 10:58:18 -- host/failover.sh@31 -- # bdevperf_pid=3563478 00:28:02.007 10:58:18 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:02.007 10:58:18 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:02.007 10:58:18 -- host/failover.sh@34 -- # waitforlisten 3563478 /var/tmp/bdevperf.sock 00:28:02.007 10:58:18 -- common/autotest_common.sh@819 -- # '[' -z 3563478 ']' 00:28:02.007 10:58:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.007 10:58:18 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:28:02.007 10:58:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:02.007 10:58:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:02.007 10:58:18 -- common/autotest_common.sh@10 -- # set +x 00:28:02.939 10:58:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:02.939 10:58:19 -- common/autotest_common.sh@852 -- # return 0 00:28:02.939 10:58:19 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:03.196 NVMe0n1 00:28:03.196 10:58:20 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:03.761 00:28:03.761 10:58:20 -- host/failover.sh@39 -- # run_test_pid=3563739 00:28:03.761 10:58:20 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:03.761 10:58:20 -- host/failover.sh@41 -- # sleep 1 00:28:04.695 10:58:21 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.954 [2024-07-10 10:58:21.545085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 [2024-07-10 10:58:21.545289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 
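The recv-state errors in this stretch follow directly from the remove_listener call just above: the 4420 listener is torn down while bdevperf is still running verify I/O against it, which is the failover the test sets out to provoke. A minimal sketch of the sequence being driven here, using only the addresses, ports and NQN from this run, is:

    # give bdevperf two paths to the same subsystem
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # primary path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # secondary path
    # with verify I/O in flight, drop the primary listener and let I/O fail over to 4421
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 4422 listener attached a little further down is used the same way for the next failover step, after which the 4421 listener is removed in turn.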
00:28:04.954 [2024-07-10 10:58:21.545301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.954 (identical recv-state messages for tqpair=0x162f8d0, logged once per timestamp through 10:58:21.545877, elided) 00:28:04.955 [2024-07-10 10:58:21.545889]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.955 [2024-07-10 10:58:21.545901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.955 [2024-07-10 10:58:21.545912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162f8d0 is same with the state(5) to be set 00:28:04.955 10:58:21 -- host/failover.sh@45 -- # sleep 3 00:28:08.254 10:58:24 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:08.254 00:28:08.254 10:58:24 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.512 [2024-07-10 10:58:25.191681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is 
same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.191978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.192006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.192018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.192030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.192043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.192055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.512 [2024-07-10 10:58:25.192067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192225] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the 
state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 [2024-07-10 10:58:25.192555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1630e20 is same with the state(5) to be set 00:28:08.513 10:58:25 -- host/failover.sh@50 -- # sleep 3 00:28:11.792 10:58:28 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.792 [2024-07-10 10:58:28.476988] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.792 10:58:28 -- host/failover.sh@55 -- # sleep 1 00:28:12.725 10:58:29 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:12.982 [2024-07-10 10:58:29.763750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.763993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.982 [2024-07-10 10:58:29.764128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764280] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 [2024-07-10 10:58:29.764472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631f00 is same with the state(5) to be set 00:28:12.983 10:58:29 -- host/failover.sh@59 -- # wait 3563739 00:28:19.538 0 00:28:19.538 10:58:35 -- host/failover.sh@61 -- # killprocess 3563478 00:28:19.538 10:58:35 -- common/autotest_common.sh@926 -- # '[' -z 3563478 ']' 00:28:19.538 10:58:35 -- common/autotest_common.sh@930 -- # kill -0 3563478 00:28:19.538 10:58:35 -- common/autotest_common.sh@931 -- # uname 00:28:19.538 10:58:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.538 10:58:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3563478 00:28:19.538 10:58:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:19.538 10:58:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:19.538 10:58:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3563478' 00:28:19.538 killing process with pid 3563478 00:28:19.538 10:58:35 -- common/autotest_common.sh@945 -- # kill 3563478 00:28:19.538 10:58:35 -- 
common/autotest_common.sh@950 -- # wait 3563478 00:28:19.538 10:58:35 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:19.538 [2024-07-10 10:58:18.777095] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:19.538 [2024-07-10 10:58:18.777173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563478 ] 00:28:19.538 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.538 [2024-07-10 10:58:18.836029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.538 [2024-07-10 10:58:18.922209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.538 Running I/O for 15 seconds... 00:28:19.538 [2024-07-10 10:58:21.546279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 
[2024-07-10 10:58:21.546564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.538 [2024-07-10 10:58:21.546897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.538 [2024-07-10 10:58:21.546910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.546926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.546939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.546953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.546966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.546980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.546994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.547163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.547218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.547273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547474] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.547817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.547845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.547981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.547996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.548008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.548022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.548035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.548049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.539 [2024-07-10 10:58:21.548062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.548076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.548089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.548103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.539 [2024-07-10 10:58:21.548116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.539 [2024-07-10 10:58:21.548130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 
[2024-07-10 10:58:21.548356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548676] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.548960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.548975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.548988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.540 [2024-07-10 10:58:21.549209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.540 [2024-07-10 10:58:21.549237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.540 [2024-07-10 10:58:21.549251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.541 [2024-07-10 10:58:21.549909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.549978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.549991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.550018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.550053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.550080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:21.550107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e1710 is same with the state(5) to be set 00:28:19.541 [2024-07-10 10:58:21.550136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.541 [2024-07-10 10:58:21.550147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.541 [2024-07-10 10:58:21.550159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:115512 len:8 PRP1 0x0 PRP2 0x0 00:28:19.541 [2024-07-10 10:58:21.550171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550225] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11e1710 was disconnected and freed. reset controller. 00:28:19.541 [2024-07-10 10:58:21.550248] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:19.541 [2024-07-10 10:58:21.550281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.541 [2024-07-10 10:58:21.550315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.541 [2024-07-10 10:58:21.550344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.541 [2024-07-10 10:58:21.550370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.541 [2024-07-10 10:58:21.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:21.550420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.541 [2024-07-10 10:58:21.550464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c20a0 (9): Bad file descriptor 00:28:19.541 [2024-07-10 10:58:21.552741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.541 [2024-07-10 10:58:21.588090] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:19.541 [2024-07-10 10:58:25.192698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:25.192752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:25.192791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:25.192808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:25.192825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:25.192840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:25.192855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:25.192869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.541 [2024-07-10 10:58:25.192885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.541 [2024-07-10 10:58:25.192898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.192914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.192928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.192959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.192973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.192988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 
10:58:25.193072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.542 [2024-07-10 10:58:25.193694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.542 [2024-07-10 10:58:25.193730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.542 [2024-07-10 10:58:25.193743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.193772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.193801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.193829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.193856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.193884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.193912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.193944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.193972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.193987] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114720 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.543 [2024-07-10 10:58:25.194772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 
[2024-07-10 10:58:25.194918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.194975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.194990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.195003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.543 [2024-07-10 10:58:25.195018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.543 [2024-07-10 10:58:25.195033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195209] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.195960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.195975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.195989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.196018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.196046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.196075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.544 [2024-07-10 10:58:25.196104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.196132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.196161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.544 [2024-07-10 10:58:25.196176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.544 [2024-07-10 10:58:25.196190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-10 10:58:25.196220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-10 10:58:25.196310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-10 10:58:25.196397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:19.545 [2024-07-10 10:58:25.196460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.545 [2024-07-10 10:58:25.196635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ce510 is same with the state(5) to be set 00:28:19.545 [2024-07-10 10:58:25.196665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:19.545 [2024-07-10 10:58:25.196677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:19.545 [2024-07-10 10:58:25.196688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114536 len:8 PRP1 0x0 PRP2 0x0 00:28:19.545 [2024-07-10 10:58:25.196702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196759] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11ce510 was disconnected and freed. reset controller. 
00:28:19.545 [2024-07-10 10:58:25.196785] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:19.545 [2024-07-10 10:58:25.196818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.545 [2024-07-10 10:58:25.196836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.545 [2024-07-10 10:58:25.196873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.545 [2024-07-10 10:58:25.196900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.545 [2024-07-10 10:58:25.196926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.545 [2024-07-10 10:58:25.196939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.545 [2024-07-10 10:58:25.196990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c20a0 (9): Bad file descriptor 00:28:19.545 [2024-07-10 10:58:25.199162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.545 [2024-07-10 10:58:25.230137] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:19.545 [2024-07-10 10:58:29.764094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.545 [2024-07-10 10:58:29.764138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.545 [2024-07-10 10:58:29.764157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.545 [2024-07-10 10:58:29.764171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.545 [2024-07-10 10:58:29.764185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.545 [2024-07-10 10:58:29.764199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.545 [2024-07-10 10:58:29.764213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:19.545 [2024-07-10 10:58:29.764227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.545 [2024-07-10 10:58:29.764246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c20a0 is same with the state(5) to be set
00:28:19.545 [2024-07-10 10:58:29.764644 - 10:58:29.768458] nvme_qpair.c: *NOTICE*: [repeated entries collapsed] queued READ and WRITE commands (sqid:1, lba 77344-78584, len:8) printed and completed as ABORTED - SQ DELETION (00/08)
00:28:19.548 [2024-07-10 10:58:29.768474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e3820 is same with the state(5) to be set
00:28:19.548 [2024-07-10 10:58:29.768490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:19.548 [2024-07-10 10:58:29.768501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:19.548 [2024-07-10 10:58:29.768513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0
00:28:19.548 [2024-07-10 10:58:29.768526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:19.548 [2024-07-10 10:58:29.768586] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11e3820 was disconnected and freed. reset controller.
00:28:19.548 [2024-07-10 10:58:29.768604] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:28:19.548 [2024-07-10 10:58:29.768619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:19.548 [2024-07-10 10:58:29.770930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:19.548 [2024-07-10 10:58:29.770969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c20a0 (9): Bad file descriptor
00:28:19.548 [2024-07-10 10:58:29.886938] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:19.548
00:28:19.548                                  Latency(us)
00:28:19.548 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:19.548 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:19.548 Verification LBA range: start 0x0 length 0x4000
00:28:19.548 NVMe0n1             :      15.01   12907.52      50.42     709.41       0.00    9383.57     776.72   16893.72
00:28:19.548 ===================================================================================================================
00:28:19.548 Total               :             12907.52      50.42     709.41       0.00    9383.57     776.72   16893.72
00:28:19.548 Received shutdown signal, test time was about 15.000000 seconds
00:28:19.548
00:28:19.548                                  Latency(us)
00:28:19.549 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:19.549 ===================================================================================================================
00:28:19.549 Total               :                 0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:19.549 10:58:35 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:19.549 10:58:35 -- host/failover.sh@65 -- # count=3
00:28:19.549 10:58:35 -- host/failover.sh@67 -- # (( count != 3 ))
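The failover.sh@65-@67 lines above are the pass criterion for the 15-second run that just ended: the bdevperf output saved to try.txt must contain exactly three "Resetting controller successful" events, one per planned path switch, and the (( count != 3 )) test fails the stage otherwise. A minimal sketch of that check, using the try.txt path that appears at failover.sh@94/@115 further down (the variable names here are illustrative, not copied from failover.sh):

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # count the controller resets that bdev_nvme reported as completed
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful resets, got $count" >&2
        exit 1
    fi

The trace that follows then starts a second bdevperf in RPC-server mode (-z -r /var/tmp/bdevperf.sock) so the remaining single-failover case can be configured and driven over that socket.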
00:28:19.549 10:58:35 -- host/failover.sh@73 -- # bdevperf_pid=3565520
00:28:19.549 10:58:35 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:19.549 10:58:35 -- host/failover.sh@75 -- # waitforlisten 3565520 /var/tmp/bdevperf.sock
00:28:19.549 10:58:35 -- common/autotest_common.sh@819 -- # '[' -z 3565520 ']'
00:28:19.549 10:58:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:19.549 10:58:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:19.549 10:58:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:19.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:19.549 10:58:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:19.549 10:58:35 -- common/autotest_common.sh@10 -- # set +x
00:28:20.113 10:58:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:20.113 10:58:36 -- common/autotest_common.sh@852 -- # return 0
00:28:20.113 10:58:36 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:20.370 [2024-07-10 10:58:36.943908] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:20.370 10:58:36 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:20.371 [2024-07-10 10:58:37.176566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:28:20.628 10:58:37 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:20.886 NVMe0n1
00:28:20.886 10:58:37 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.144
00:28:21.144 10:58:37 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.401
00:28:21.401 10:58:38 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:21.401 10:58:38 -- host/failover.sh@82 -- # grep -q NVMe0
00:28:21.657 10:58:38 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:21.914 10:58:38 -- host/failover.sh@87 -- # sleep 3
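The trace above is the single-failover scenario in miniature: the two alternate listeners are announced, the same controller name (NVMe0) is attached over three TCP trids through the bdevperf RPC socket, and the active path (port 4420) is then detached while I/O continues. Condensed into plain shell (the long rpc.py path from the trace is shortened to $rpc here, and the loop is a compression of the three attach calls, not the literal script text):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # register three paths under one controller name; bdev_nvme keeps the extra
    # trids as failover targets for NVMe0
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
             -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # drop the active path; the bdevperf log below records
    # "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" as a result
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3

The bdev_nvme_get_controllers | grep -q NVMe0 checks on either side of the detach verify that the controller stays registered across the path change.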
00:28:25.195 10:58:41 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:25.195 10:58:41 -- host/failover.sh@88 -- # grep -q NVMe0
00:28:25.195 10:58:41 -- host/failover.sh@90 -- # run_test_pid=3566334
00:28:25.195 10:58:41 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:25.195 10:58:41 -- host/failover.sh@92 -- # wait 3566334
00:28:26.130 0
00:28:26.130 10:58:42 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:26.130 [2024-07-10 10:58:35.780378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:28:26.130 [2024-07-10 10:58:35.780503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565520 ]
00:28:26.130 EAL: No free 2048 kB hugepages reported on node 1
00:28:26.130 [2024-07-10 10:58:35.843602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:26.130 [2024-07-10 10:58:35.926246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:26.130 [2024-07-10 10:58:38.566670] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:26.130 [2024-07-10 10:58:38.566756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:26.130 [2024-07-10 10:58:38.566794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.130 [2024-07-10 10:58:38.566812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:26.130 [2024-07-10 10:58:38.566826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.130 [2024-07-10 10:58:38.566840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:26.130 [2024-07-10 10:58:38.566853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.130 [2024-07-10 10:58:38.566868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:26.130 [2024-07-10 10:58:38.566880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.130 [2024-07-10 10:58:38.566894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.130 [2024-07-10 10:58:38.566933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.130 [2024-07-10 10:58:38.566965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa480a0 (9): Bad file descriptor
00:28:26.130 [2024-07-10 10:58:38.578594] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:26.130 Running I/O for 1 seconds...
00:28:26.130
00:28:26.130                                  Latency(us)
00:28:26.130 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:26.130 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:26.130 Verification LBA range: start 0x0 length 0x4000
00:28:26.130 NVMe0n1             :       1.01   12672.53      49.50       0.00       0.00   10056.56    1110.47   16117.00
00:28:26.130 ===================================================================================================================
00:28:26.130 Total               :             12672.53      49.50       0.00       0.00   10056.56    1110.47   16117.00
00:28:26.130 10:58:42 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.130 10:58:42 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:26.388 10:58:43 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.645 10:58:43 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.645 10:58:43 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:26.903 10:58:43 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:27.160 10:58:43 -- host/failover.sh@101 -- # sleep 3
00:28:30.438 10:58:46 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:30.438 10:58:46 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:30.438 10:58:47 -- host/failover.sh@108 -- # killprocess 3565520
00:28:30.438 10:58:47 -- common/autotest_common.sh@926 -- # '[' -z 3565520 ']'
00:28:30.438 10:58:47 -- common/autotest_common.sh@930 -- # kill -0 3565520
00:28:30.438 10:58:47 -- common/autotest_common.sh@931 -- # uname
00:28:30.438 10:58:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:30.438 10:58:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3565520
00:28:30.438 10:58:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:30.438 10:58:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:30.438 10:58:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3565520'
00:28:30.438 killing process with pid 3565520
00:28:30.438 10:58:47 -- common/autotest_common.sh@945 -- # kill 3565520
00:28:30.438 10:58:47 -- common/autotest_common.sh@950 -- # wait 3565520
00:28:30.717 10:58:47 -- host/failover.sh@110 -- # sync
00:28:30.717 10:58:47 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:30.995 10:58:47 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:30.995 10:58:47 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.995 10:58:47 -- host/failover.sh@116 -- # nvmftestfini
00:28:30.995 10:58:47 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:30.995 10:58:47 -- nvmf/common.sh@116 -- # sync
00:28:30.995 10:58:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:28:30.995 10:58:47 -- nvmf/common.sh@119 -- # set +e
00:28:30.995 10:58:47 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:30.995 10:58:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
-- # modprobe -v -r nvme-tcp 00:28:30.995 rmmod nvme_tcp 00:28:30.995 rmmod nvme_fabrics 00:28:30.995 rmmod nvme_keyring 00:28:30.995 10:58:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:30.995 10:58:47 -- nvmf/common.sh@123 -- # set -e 00:28:30.995 10:58:47 -- nvmf/common.sh@124 -- # return 0 00:28:30.995 10:58:47 -- nvmf/common.sh@477 -- # '[' -n 3563171 ']' 00:28:30.995 10:58:47 -- nvmf/common.sh@478 -- # killprocess 3563171 00:28:30.995 10:58:47 -- common/autotest_common.sh@926 -- # '[' -z 3563171 ']' 00:28:30.995 10:58:47 -- common/autotest_common.sh@930 -- # kill -0 3563171 00:28:30.996 10:58:47 -- common/autotest_common.sh@931 -- # uname 00:28:30.996 10:58:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:30.996 10:58:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3563171 00:28:30.996 10:58:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:30.996 10:58:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:30.996 10:58:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3563171' 00:28:30.996 killing process with pid 3563171 00:28:30.996 10:58:47 -- common/autotest_common.sh@945 -- # kill 3563171 00:28:30.996 10:58:47 -- common/autotest_common.sh@950 -- # wait 3563171 00:28:31.255 10:58:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:31.255 10:58:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:31.255 10:58:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:31.255 10:58:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.255 10:58:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:31.255 10:58:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.255 10:58:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.255 10:58:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.787 10:58:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:33.787 00:28:33.787 real 0m36.133s 00:28:33.787 user 2m7.203s 00:28:33.787 sys 0m6.100s 00:28:33.787 10:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.787 10:58:50 -- common/autotest_common.sh@10 -- # set +x 00:28:33.787 ************************************ 00:28:33.787 END TEST nvmf_failover 00:28:33.787 ************************************ 00:28:33.787 10:58:50 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:33.787 10:58:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:33.787 10:58:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.787 10:58:50 -- common/autotest_common.sh@10 -- # set +x 00:28:33.787 ************************************ 00:28:33.787 START TEST nvmf_discovery 00:28:33.787 ************************************ 00:28:33.787 10:58:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:33.787 * Looking for test storage... 
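The nvmf_failover run that just ended (host/failover.sh, steps 95 through 103 in the trace above) boils down to a short RPC exchange against the bdevperf application: detach one target path at a time and confirm that the NVMe0 controller is still reported afterwards. A condensed, hedged sketch of that exchange (rpc.py stands for scripts/rpc.py in the SPDK tree; the socket path, address and NQN are the ones used in this run):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # controller present
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
         -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # drop the 4422 path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # still connected
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
         -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # drop the 4421 path as well
  sleep 3                                                                      # give multipath time to settle
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0   # controller survives on the remaining listener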
00:28:33.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.787 10:58:50 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.787 10:58:50 -- nvmf/common.sh@7 -- # uname -s 00:28:33.787 10:58:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.787 10:58:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.787 10:58:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.787 10:58:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.787 10:58:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.787 10:58:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.787 10:58:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.787 10:58:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.787 10:58:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.787 10:58:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.787 10:58:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.787 10:58:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.787 10:58:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.787 10:58:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.787 10:58:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.787 10:58:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.787 10:58:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.787 10:58:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.787 10:58:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.787 10:58:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.788 10:58:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.788 10:58:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.788 10:58:50 -- paths/export.sh@5 -- # export PATH 00:28:33.788 10:58:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.788 10:58:50 -- nvmf/common.sh@46 -- # : 0 00:28:33.788 10:58:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:33.788 10:58:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:33.788 10:58:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:33.788 10:58:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.788 10:58:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.788 10:58:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:33.788 10:58:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:33.788 10:58:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:33.788 10:58:50 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:33.788 10:58:50 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:33.788 10:58:50 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:33.788 10:58:50 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:33.788 10:58:50 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:33.788 10:58:50 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:33.788 10:58:50 -- host/discovery.sh@25 -- # nvmftestinit 00:28:33.788 10:58:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:33.788 10:58:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.788 10:58:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:33.788 10:58:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:33.788 10:58:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:33.788 10:58:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.788 10:58:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.788 10:58:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.788 10:58:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:33.788 10:58:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:33.788 10:58:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:33.788 10:58:50 -- common/autotest_common.sh@10 -- # set +x 00:28:35.690 10:58:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:35.690 10:58:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:35.690 10:58:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:35.690 10:58:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:35.690 10:58:52 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:35.690 10:58:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:35.690 10:58:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:35.690 10:58:52 -- nvmf/common.sh@294 -- # net_devs=() 00:28:35.690 10:58:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:35.690 10:58:52 -- nvmf/common.sh@295 -- # e810=() 00:28:35.690 10:58:52 -- nvmf/common.sh@295 -- # local -ga e810 00:28:35.690 10:58:52 -- nvmf/common.sh@296 -- # x722=() 00:28:35.690 10:58:52 -- nvmf/common.sh@296 -- # local -ga x722 00:28:35.690 10:58:52 -- nvmf/common.sh@297 -- # mlx=() 00:28:35.690 10:58:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:35.690 10:58:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.690 10:58:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:35.690 10:58:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:35.690 10:58:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:35.690 10:58:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.690 10:58:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:35.690 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:35.690 10:58:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.690 10:58:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:35.690 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:35.690 10:58:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:35.690 10:58:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.690 
10:58:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.690 10:58:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.690 10:58:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.690 10:58:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:35.690 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:35.690 10:58:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.690 10:58:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.690 10:58:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.690 10:58:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.690 10:58:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.690 10:58:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:35.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:35.690 10:58:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.690 10:58:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:35.690 10:58:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:35.690 10:58:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:35.690 10:58:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.690 10:58:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.690 10:58:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.690 10:58:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:35.690 10:58:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.690 10:58:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.690 10:58:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:35.690 10:58:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.690 10:58:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.690 10:58:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:35.690 10:58:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:35.690 10:58:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.690 10:58:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.690 10:58:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.690 10:58:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.690 10:58:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:35.690 10:58:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.690 10:58:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.690 10:58:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.690 10:58:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:35.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:35.690 00:28:35.690 --- 10.0.0.2 ping statistics --- 00:28:35.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.690 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:35.690 10:58:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:28:35.690 00:28:35.690 --- 10.0.0.1 ping statistics --- 00:28:35.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.690 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:28:35.690 10:58:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.690 10:58:52 -- nvmf/common.sh@410 -- # return 0 00:28:35.690 10:58:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:35.690 10:58:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.690 10:58:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:35.690 10:58:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.690 10:58:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:35.690 10:58:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:35.690 10:58:52 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:35.690 10:58:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:35.690 10:58:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:35.690 10:58:52 -- common/autotest_common.sh@10 -- # set +x 00:28:35.690 10:58:52 -- nvmf/common.sh@469 -- # nvmfpid=3568966 00:28:35.690 10:58:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:35.690 10:58:52 -- nvmf/common.sh@470 -- # waitforlisten 3568966 00:28:35.690 10:58:52 -- common/autotest_common.sh@819 -- # '[' -z 3568966 ']' 00:28:35.690 10:58:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.690 10:58:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.690 10:58:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.690 10:58:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.690 10:58:52 -- common/autotest_common.sh@10 -- # set +x 00:28:35.690 [2024-07-10 10:58:52.369600] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:35.690 [2024-07-10 10:58:52.369669] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.690 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.690 [2024-07-10 10:58:52.435723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.949 [2024-07-10 10:58:52.523812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.949 [2024-07-10 10:58:52.523963] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.949 [2024-07-10 10:58:52.523984] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.949 [2024-07-10 10:58:52.523998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
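Both directions ping cleanly, so the namespace plumbing that nvmf_tcp_init performed above is in place before the target starts. A condensed, hedged reconstruction of that setup follows; cvl_0_0 and cvl_0_1 are the two ice/E810 ports detected earlier, the addresses are the ones used in this run, and every command needs root:

  ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                              # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # nvmf_tgt from the SPDK build, inside the namespace

Keeping the target in its own network namespace lets initiator and target share one machine while the NVMe/TCP traffic still crosses the physical NIC ports rather than loopback.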
00:28:35.949 [2024-07-10 10:58:52.524035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.516 10:58:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:36.516 10:58:53 -- common/autotest_common.sh@852 -- # return 0 00:28:36.516 10:58:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:36.516 10:58:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:36.516 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.516 10:58:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.516 10:58:53 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.516 10:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.516 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.516 [2024-07-10 10:58:53.328257] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.516 10:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.516 10:58:53 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:36.516 10:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.516 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.516 [2024-07-10 10:58:53.336436] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:36.516 10:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.516 10:58:53 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:36.516 10:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.516 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.775 null0 00:28:36.775 10:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.775 10:58:53 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:36.775 10:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.775 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.775 null1 00:28:36.775 10:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.775 10:58:53 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:36.775 10:58:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.775 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.775 10:58:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.775 10:58:53 -- host/discovery.sh@45 -- # hostpid=3569123 00:28:36.775 10:58:53 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:36.775 10:58:53 -- host/discovery.sh@46 -- # waitforlisten 3569123 /tmp/host.sock 00:28:36.775 10:58:53 -- common/autotest_common.sh@819 -- # '[' -z 3569123 ']' 00:28:36.775 10:58:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:36.775 10:58:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:36.775 10:58:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:36.775 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:36.775 10:58:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:36.775 10:58:53 -- common/autotest_common.sh@10 -- # set +x 00:28:36.775 [2024-07-10 10:58:53.405461] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:28:36.775 [2024-07-10 10:58:53.405541] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569123 ] 00:28:36.775 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.775 [2024-07-10 10:58:53.465742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.775 [2024-07-10 10:58:53.554202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:36.775 [2024-07-10 10:58:53.554383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.709 10:58:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:37.709 10:58:54 -- common/autotest_common.sh@852 -- # return 0 00:28:37.709 10:58:54 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.709 10:58:54 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.709 10:58:54 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.709 10:58:54 -- host/discovery.sh@72 -- # notify_id=0 00:28:37.709 10:58:54 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # sort 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # xargs 00:28:37.709 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.709 10:58:54 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:37.709 10:58:54 -- host/discovery.sh@79 -- # get_bdev_list 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # sort 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # xargs 00:28:37.709 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.709 10:58:54 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:37.709 10:58:54 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.709 10:58:54 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # sort 00:28:37.709 10:58:54 -- host/discovery.sh@59 -- # xargs 00:28:37.709 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.709 10:58:54 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:37.709 10:58:54 -- host/discovery.sh@83 -- # get_bdev_list 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.709 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.709 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # sort 00:28:37.709 10:58:54 -- host/discovery.sh@55 -- # xargs 00:28:37.968 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.968 10:58:54 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:37.968 10:58:54 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:37.968 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.968 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.968 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.968 10:58:54 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.968 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.968 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # sort 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # xargs 00:28:37.968 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.968 10:58:54 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:37.968 10:58:54 -- host/discovery.sh@87 -- # get_bdev_list 00:28:37.968 10:58:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.968 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.968 10:58:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.968 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.968 10:58:54 -- host/discovery.sh@55 -- # sort 00:28:37.968 10:58:54 -- host/discovery.sh@55 -- # xargs 00:28:37.968 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.968 10:58:54 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:37.968 10:58:54 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.968 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.968 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.968 [2024-07-10 10:58:54.660083] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.968 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.968 10:58:54 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.968 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.968 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.968 10:58:54 -- host/discovery.sh@59 -- # sort 00:28:37.968 10:58:54 -- 
host/discovery.sh@59 -- # xargs 00:28:37.968 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.968 10:58:54 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:37.968 10:58:54 -- host/discovery.sh@93 -- # get_bdev_list 00:28:37.968 10:58:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.968 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.968 10:58:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.968 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.969 10:58:54 -- host/discovery.sh@55 -- # sort 00:28:37.969 10:58:54 -- host/discovery.sh@55 -- # xargs 00:28:37.969 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.969 10:58:54 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:37.969 10:58:54 -- host/discovery.sh@94 -- # get_notification_count 00:28:37.969 10:58:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:37.969 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.969 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.969 10:58:54 -- host/discovery.sh@74 -- # jq '. | length' 00:28:37.969 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.969 10:58:54 -- host/discovery.sh@74 -- # notification_count=0 00:28:37.969 10:58:54 -- host/discovery.sh@75 -- # notify_id=0 00:28:37.969 10:58:54 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:37.969 10:58:54 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:37.969 10:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.969 10:58:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.969 10:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.969 10:58:54 -- host/discovery.sh@100 -- # sleep 1 00:28:38.904 [2024-07-10 10:58:55.438353] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:38.904 [2024-07-10 10:58:55.438386] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:38.904 [2024-07-10 10:58:55.438421] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:38.904 [2024-07-10 10:58:55.565858] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:38.904 [2024-07-10 10:58:55.669776] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:38.904 [2024-07-10 10:58:55.669813] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:39.162 10:58:55 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:39.162 10:58:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:39.162 10:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.162 10:58:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:39.162 10:58:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.162 10:58:55 -- host/discovery.sh@59 -- # sort 00:28:39.162 10:58:55 -- host/discovery.sh@59 -- # xargs 00:28:39.162 10:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@102 -- # get_bdev_list 00:28:39.162 10:58:55 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.162 10:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.162 10:58:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:39.162 10:58:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.162 10:58:55 -- host/discovery.sh@55 -- # sort 00:28:39.162 10:58:55 -- host/discovery.sh@55 -- # xargs 00:28:39.162 10:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:39.162 10:58:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:39.162 10:58:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:39.162 10:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.162 10:58:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.162 10:58:55 -- host/discovery.sh@63 -- # xargs 00:28:39.162 10:58:55 -- host/discovery.sh@63 -- # sort -n 00:28:39.162 10:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@104 -- # get_notification_count 00:28:39.162 10:58:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:39.162 10:58:55 -- host/discovery.sh@74 -- # jq '. | length' 00:28:39.162 10:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.162 10:58:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.162 10:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@74 -- # notification_count=1 00:28:39.162 10:58:55 -- host/discovery.sh@75 -- # notify_id=1 00:28:39.162 10:58:55 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:39.162 10:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.162 10:58:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.162 10:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.162 10:58:55 -- host/discovery.sh@109 -- # sleep 1 00:28:40.535 10:58:56 -- host/discovery.sh@110 -- # get_bdev_list 00:28:40.535 10:58:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.535 10:58:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:40.535 10:58:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.535 10:58:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.535 10:58:56 -- host/discovery.sh@55 -- # sort 00:28:40.535 10:58:56 -- host/discovery.sh@55 -- # xargs 00:28:40.535 10:58:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.535 10:58:57 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:40.535 10:58:57 -- host/discovery.sh@111 -- # get_notification_count 00:28:40.535 10:58:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:40.535 10:58:57 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:40.535 10:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.535 10:58:57 -- common/autotest_common.sh@10 -- # set +x 00:28:40.535 10:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.535 10:58:57 -- host/discovery.sh@74 -- # notification_count=1 00:28:40.535 10:58:57 -- host/discovery.sh@75 -- # notify_id=2 00:28:40.535 10:58:57 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:40.535 10:58:57 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:40.535 10:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.535 10:58:57 -- common/autotest_common.sh@10 -- # set +x 00:28:40.535 [2024-07-10 10:58:57.059298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:40.535 [2024-07-10 10:58:57.059743] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:40.535 [2024-07-10 10:58:57.059787] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:40.535 10:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.535 10:58:57 -- host/discovery.sh@117 -- # sleep 1 00:28:40.535 [2024-07-10 10:58:57.187194] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:40.535 [2024-07-10 10:58:57.244706] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:40.535 [2024-07-10 10:58:57.244727] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:40.535 [2024-07-10 10:58:57.244751] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:41.469 10:58:58 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:41.469 10:58:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:41.469 10:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.469 10:58:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:41.469 10:58:58 -- common/autotest_common.sh@10 -- # set +x 00:28:41.469 10:58:58 -- host/discovery.sh@59 -- # sort 00:28:41.469 10:58:58 -- host/discovery.sh@59 -- # xargs 00:28:41.469 10:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@119 -- # get_bdev_list 00:28:41.469 10:58:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.469 10:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.469 10:58:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:41.469 10:58:58 -- common/autotest_common.sh@10 -- # set +x 00:28:41.469 10:58:58 -- host/discovery.sh@55 -- # sort 00:28:41.469 10:58:58 -- host/discovery.sh@55 -- # xargs 00:28:41.469 10:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:41.469 10:58:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:41.469 10:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.469 10:58:58 -- common/autotest_common.sh@10 -- 
# set +x 00:28:41.469 10:58:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:41.469 10:58:58 -- host/discovery.sh@63 -- # sort -n 00:28:41.469 10:58:58 -- host/discovery.sh@63 -- # xargs 00:28:41.469 10:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@121 -- # get_notification_count 00:28:41.469 10:58:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:41.469 10:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.469 10:58:58 -- host/discovery.sh@74 -- # jq '. | length' 00:28:41.469 10:58:58 -- common/autotest_common.sh@10 -- # set +x 00:28:41.469 10:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@74 -- # notification_count=0 00:28:41.469 10:58:58 -- host/discovery.sh@75 -- # notify_id=2 00:28:41.469 10:58:58 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:41.469 10:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.469 10:58:58 -- common/autotest_common.sh@10 -- # set +x 00:28:41.469 [2024-07-10 10:58:58.231269] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:41.469 [2024-07-10 10:58:58.231310] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:41.469 10:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.469 10:58:58 -- host/discovery.sh@127 -- # sleep 1 00:28:41.469 [2024-07-10 10:58:58.237745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.469 [2024-07-10 10:58:58.237779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.469 [2024-07-10 10:58:58.237797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.469 [2024-07-10 10:58:58.237812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.469 [2024-07-10 10:58:58.237827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.469 [2024-07-10 10:58:58.237842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.469 [2024-07-10 10:58:58.237856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:41.469 [2024-07-10 10:58:58.237870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:41.469 [2024-07-10 10:58:58.237884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.469 [2024-07-10 10:58:58.247737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.469 [2024-07-10 10:58:58.257780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.469 [2024-07-10 10:58:58.258016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.469 [2024-07-10 10:58:58.258194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.469 [2024-07-10 10:58:58.258222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.469 [2024-07-10 10:58:58.258239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.469 [2024-07-10 10:58:58.258264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.469 [2024-07-10 10:58:58.258287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.469 [2024-07-10 10:58:58.258302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.469 [2024-07-10 10:58:58.258318] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.469 [2024-07-10 10:58:58.258339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.469 [2024-07-10 10:58:58.267874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.469 [2024-07-10 10:58:58.268093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.469 [2024-07-10 10:58:58.268335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.469 [2024-07-10 10:58:58.268361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.469 [2024-07-10 10:58:58.268376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.469 [2024-07-10 10:58:58.268398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.469 [2024-07-10 10:58:58.268439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.469 [2024-07-10 10:58:58.268463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.470 [2024-07-10 10:58:58.268477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.470 [2024-07-10 10:58:58.268497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.470 [2024-07-10 10:58:58.277951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.470 [2024-07-10 10:58:58.278147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.470 [2024-07-10 10:58:58.278348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.470 [2024-07-10 10:58:58.278376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.470 [2024-07-10 10:58:58.278394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.470 [2024-07-10 10:58:58.278418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.470 [2024-07-10 10:58:58.278451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.470 [2024-07-10 10:58:58.278482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.470 [2024-07-10 10:58:58.278495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.470 [2024-07-10 10:58:58.278556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.470 [2024-07-10 10:58:58.288029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.470 [2024-07-10 10:58:58.288233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.470 [2024-07-10 10:58:58.288405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.470 [2024-07-10 10:58:58.288443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.470 [2024-07-10 10:58:58.288462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.470 [2024-07-10 10:58:58.288486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.470 [2024-07-10 10:58:58.288508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.470 [2024-07-10 10:58:58.288523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.470 [2024-07-10 10:58:58.288537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.470 [2024-07-10 10:58:58.288571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.728 [2024-07-10 10:58:58.298106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.728 [2024-07-10 10:58:58.298290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.728 [2024-07-10 10:58:58.298548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.728 [2024-07-10 10:58:58.298575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.728 [2024-07-10 10:58:58.298590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.728 [2024-07-10 10:58:58.298612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.728 [2024-07-10 10:58:58.298646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.728 [2024-07-10 10:58:58.298664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.728 [2024-07-10 10:58:58.298683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.728 [2024-07-10 10:58:58.298713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.728 [2024-07-10 10:58:58.308183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.728 [2024-07-10 10:58:58.308433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.728 [2024-07-10 10:58:58.308638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.728 [2024-07-10 10:58:58.308663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.728 [2024-07-10 10:58:58.308678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.728 [2024-07-10 10:58:58.308700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.728 [2024-07-10 10:58:58.308732] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.728 [2024-07-10 10:58:58.308749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.728 [2024-07-10 10:58:58.308762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.728 [2024-07-10 10:58:58.308781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.728 [2024-07-10 10:58:58.318259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.728 [2024-07-10 10:58:58.318493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.728 [2024-07-10 10:58:58.318645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.728 [2024-07-10 10:58:58.318670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1974590 with addr=10.0.0.2, port=4420 00:28:41.728 [2024-07-10 10:58:58.318685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974590 is same with the state(5) to be set 00:28:41.728 [2024-07-10 10:58:58.318706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974590 (9): Bad file descriptor 00:28:41.728 [2024-07-10 10:58:58.318757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:41.728 [2024-07-10 10:58:58.318776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:41.728 [2024-07-10 10:58:58.318790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:41.728 [2024-07-10 10:58:58.318839] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:41.728 [2024-07-10 10:58:58.318866] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:41.728 [2024-07-10 10:58:58.318891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
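The burst of "connect() failed, errno = 111" messages above is the expected fallout of host/discovery.sh step 126 removing the 4420 listener: the host keeps trying to reset that path until the next discovery log page no longer advertises it, at which point the 4420 path is dropped and only 4421 remains (the "not found" / "found again" lines at the end of the block). A hedged sketch of that step and its check (rpc.py = scripts/rpc.py; the first call goes to the nvmf target over its default RPC socket, the second to the host application at /tmp/host.sock):

  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420                    # target side: retire the first listener
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
         | jq -r '.[].ctrlrs[].trid.trsvcid'           # host side: expect only 4421 to remain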
00:28:42.661 10:58:59 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:42.661 10:58:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:42.661 10:58:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:42.661 10:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.661 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:28:42.661 10:58:59 -- host/discovery.sh@59 -- # sort 00:28:42.661 10:58:59 -- host/discovery.sh@59 -- # xargs 00:28:42.661 10:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@129 -- # get_bdev_list 00:28:42.661 10:58:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.661 10:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.661 10:58:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:42.661 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:28:42.661 10:58:59 -- host/discovery.sh@55 -- # sort 00:28:42.661 10:58:59 -- host/discovery.sh@55 -- # xargs 00:28:42.661 10:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:42.661 10:58:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:42.661 10:58:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:42.661 10:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.661 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:28:42.661 10:58:59 -- host/discovery.sh@63 -- # sort -n 00:28:42.661 10:58:59 -- host/discovery.sh@63 -- # xargs 00:28:42.661 10:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@131 -- # get_notification_count 00:28:42.661 10:58:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:42.661 10:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.661 10:58:59 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:42.661 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:28:42.661 10:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@74 -- # notification_count=0 00:28:42.661 10:58:59 -- host/discovery.sh@75 -- # notify_id=2 00:28:42.661 10:58:59 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:42.661 10:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.661 10:58:59 -- common/autotest_common.sh@10 -- # set +x 00:28:42.661 10:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.661 10:58:59 -- host/discovery.sh@135 -- # sleep 1 00:28:44.034 10:59:00 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:44.034 10:59:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:44.034 10:59:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:44.034 10:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.034 10:59:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.034 10:59:00 -- host/discovery.sh@59 -- # sort 00:28:44.034 10:59:00 -- host/discovery.sh@59 -- # xargs 00:28:44.034 10:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.034 10:59:00 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:44.034 10:59:00 -- host/discovery.sh@137 -- # get_bdev_list 00:28:44.034 10:59:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.034 10:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.034 10:59:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:44.034 10:59:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.034 10:59:00 -- host/discovery.sh@55 -- # sort 00:28:44.034 10:59:00 -- host/discovery.sh@55 -- # xargs 00:28:44.034 10:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.034 10:59:00 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:44.034 10:59:00 -- host/discovery.sh@138 -- # get_notification_count 00:28:44.034 10:59:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:44.034 10:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.034 10:59:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.034 10:59:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:44.034 10:59:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.034 10:59:00 -- host/discovery.sh@74 -- # notification_count=2 00:28:44.034 10:59:00 -- host/discovery.sh@75 -- # notify_id=4 00:28:44.034 10:59:00 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:44.034 10:59:00 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.034 10:59:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.034 10:59:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.965 [2024-07-10 10:59:01.604612] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:44.965 [2024-07-10 10:59:01.604639] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:44.965 [2024-07-10 10:59:01.604660] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:44.965 [2024-07-10 10:59:01.690962] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:44.965 [2024-07-10 10:59:01.758023] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:44.965 [2024-07-10 10:59:01.758065] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:44.965 10:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.965 10:59:01 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.965 10:59:01 -- common/autotest_common.sh@640 -- # local es=0 00:28:44.965 10:59:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.965 10:59:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:44.965 10:59:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.965 10:59:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:44.965 10:59:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.965 10:59:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.965 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.965 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.965 request: 00:28:44.965 { 00:28:44.965 "name": "nvme", 00:28:44.965 "trtype": "tcp", 00:28:44.965 "traddr": "10.0.0.2", 00:28:44.965 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:44.965 "adrfam": "ipv4", 00:28:44.965 "trsvcid": "8009", 00:28:44.965 "wait_for_attach": true, 00:28:44.965 "method": "bdev_nvme_start_discovery", 00:28:44.965 "req_id": 1 00:28:44.965 } 00:28:44.965 Got JSON-RPC error response 00:28:44.965 response: 00:28:44.965 { 00:28:44.965 "code": -17, 00:28:44.965 "message": "File exists" 00:28:44.965 } 00:28:44.965 10:59:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:44.965 10:59:01 -- common/autotest_common.sh@643 -- # es=1 00:28:44.965 10:59:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:44.965 10:59:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:44.965 10:59:01 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:44.965 10:59:01 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:44.965 10:59:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:44.965 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.965 10:59:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:44.965 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:44.965 10:59:01 -- host/discovery.sh@67 -- # sort 00:28:44.965 10:59:01 -- host/discovery.sh@67 -- # xargs 00:28:44.965 10:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.222 10:59:01 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:45.222 10:59:01 -- host/discovery.sh@147 -- # get_bdev_list 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:45.222 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.222 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # sort 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # xargs 00:28:45.222 10:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.222 10:59:01 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:45.222 10:59:01 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:45.222 10:59:01 -- common/autotest_common.sh@640 -- # local es=0 00:28:45.222 10:59:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:45.222 10:59:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:45.222 10:59:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.222 10:59:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:45.222 10:59:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.222 10:59:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:45.222 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.222 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:45.222 request: 00:28:45.222 { 00:28:45.222 "name": "nvme_second", 00:28:45.222 "trtype": "tcp", 00:28:45.222 "traddr": "10.0.0.2", 00:28:45.222 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:45.222 "adrfam": "ipv4", 00:28:45.222 "trsvcid": "8009", 00:28:45.222 "wait_for_attach": true, 00:28:45.222 "method": "bdev_nvme_start_discovery", 00:28:45.222 "req_id": 1 00:28:45.222 } 00:28:45.222 Got JSON-RPC error response 00:28:45.222 response: 00:28:45.222 { 00:28:45.222 "code": -17, 00:28:45.222 "message": "File exists" 00:28:45.222 } 00:28:45.222 10:59:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:45.222 10:59:01 -- common/autotest_common.sh@643 -- # es=1 00:28:45.222 10:59:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:45.222 10:59:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:45.222 10:59:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:45.222 10:59:01 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:45.222 10:59:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:28:45.222 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.222 10:59:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:45.222 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:45.222 10:59:01 -- host/discovery.sh@67 -- # sort 00:28:45.222 10:59:01 -- host/discovery.sh@67 -- # xargs 00:28:45.222 10:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.222 10:59:01 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:45.222 10:59:01 -- host/discovery.sh@153 -- # get_bdev_list 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.222 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:45.222 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # sort 00:28:45.222 10:59:01 -- host/discovery.sh@55 -- # xargs 00:28:45.222 10:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.222 10:59:01 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:45.222 10:59:01 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:45.222 10:59:01 -- common/autotest_common.sh@640 -- # local es=0 00:28:45.222 10:59:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:45.222 10:59:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:45.222 10:59:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.222 10:59:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:45.222 10:59:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:45.222 10:59:01 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:45.222 10:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.222 10:59:01 -- common/autotest_common.sh@10 -- # set +x 00:28:46.154 [2024-07-10 10:59:02.957446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.154 [2024-07-10 10:59:02.957640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.154 [2024-07-10 10:59:02.957667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1972a80 with addr=10.0.0.2, port=8010 00:28:46.154 [2024-07-10 10:59:02.957698] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:46.154 [2024-07-10 10:59:02.957712] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:46.154 [2024-07-10 10:59:02.957724] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:47.523 [2024-07-10 10:59:03.959912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.523 [2024-07-10 10:59:03.960116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.523 [2024-07-10 10:59:03.960145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1972a80 with addr=10.0.0.2, port=8010 00:28:47.523 [2024-07-10 10:59:03.960165] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: 
*ERROR*: failed to create admin qpair 00:28:47.523 [2024-07-10 10:59:03.960179] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:47.523 [2024-07-10 10:59:03.960192] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:48.454 [2024-07-10 10:59:04.962103] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:48.454 request: 00:28:48.454 { 00:28:48.454 "name": "nvme_second", 00:28:48.454 "trtype": "tcp", 00:28:48.454 "traddr": "10.0.0.2", 00:28:48.454 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:48.454 "adrfam": "ipv4", 00:28:48.454 "trsvcid": "8010", 00:28:48.454 "attach_timeout_ms": 3000, 00:28:48.454 "method": "bdev_nvme_start_discovery", 00:28:48.454 "req_id": 1 00:28:48.454 } 00:28:48.454 Got JSON-RPC error response 00:28:48.454 response: 00:28:48.454 { 00:28:48.454 "code": -110, 00:28:48.454 "message": "Connection timed out" 00:28:48.454 } 00:28:48.454 10:59:04 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:48.454 10:59:04 -- common/autotest_common.sh@643 -- # es=1 00:28:48.454 10:59:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:48.454 10:59:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:48.454 10:59:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:48.454 10:59:04 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:48.454 10:59:04 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:48.454 10:59:04 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:48.454 10:59:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.454 10:59:04 -- common/autotest_common.sh@10 -- # set +x 00:28:48.454 10:59:04 -- host/discovery.sh@67 -- # sort 00:28:48.454 10:59:04 -- host/discovery.sh@67 -- # xargs 00:28:48.454 10:59:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.454 10:59:05 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:48.454 10:59:05 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:48.454 10:59:05 -- host/discovery.sh@162 -- # kill 3569123 00:28:48.454 10:59:05 -- host/discovery.sh@163 -- # nvmftestfini 00:28:48.454 10:59:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:48.454 10:59:05 -- nvmf/common.sh@116 -- # sync 00:28:48.454 10:59:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:48.454 10:59:05 -- nvmf/common.sh@119 -- # set +e 00:28:48.454 10:59:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:48.454 10:59:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:48.454 rmmod nvme_tcp 00:28:48.454 rmmod nvme_fabrics 00:28:48.454 rmmod nvme_keyring 00:28:48.454 10:59:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:48.454 10:59:05 -- nvmf/common.sh@123 -- # set -e 00:28:48.454 10:59:05 -- nvmf/common.sh@124 -- # return 0 00:28:48.454 10:59:05 -- nvmf/common.sh@477 -- # '[' -n 3568966 ']' 00:28:48.454 10:59:05 -- nvmf/common.sh@478 -- # killprocess 3568966 00:28:48.454 10:59:05 -- common/autotest_common.sh@926 -- # '[' -z 3568966 ']' 00:28:48.454 10:59:05 -- common/autotest_common.sh@930 -- # kill -0 3568966 00:28:48.454 10:59:05 -- common/autotest_common.sh@931 -- # uname 00:28:48.454 10:59:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:48.454 10:59:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3568966 00:28:48.454 10:59:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:48.454 10:59:05 -- common/autotest_common.sh@936 -- # '[' 
reactor_1 = sudo ']' 00:28:48.454 10:59:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3568966' 00:28:48.454 killing process with pid 3568966 00:28:48.454 10:59:05 -- common/autotest_common.sh@945 -- # kill 3568966 00:28:48.454 10:59:05 -- common/autotest_common.sh@950 -- # wait 3568966 00:28:48.712 10:59:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:48.712 10:59:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:48.712 10:59:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:48.712 10:59:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.712 10:59:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:48.712 10:59:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.712 10:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.712 10:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.613 10:59:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:50.613 00:28:50.613 real 0m17.348s 00:28:50.613 user 0m26.785s 00:28:50.613 sys 0m2.970s 00:28:50.613 10:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.613 10:59:07 -- common/autotest_common.sh@10 -- # set +x 00:28:50.613 ************************************ 00:28:50.613 END TEST nvmf_discovery 00:28:50.613 ************************************ 00:28:50.613 10:59:07 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:50.613 10:59:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:50.613 10:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.613 10:59:07 -- common/autotest_common.sh@10 -- # set +x 00:28:50.613 ************************************ 00:28:50.613 START TEST nvmf_discovery_remove_ifc 00:28:50.613 ************************************ 00:28:50.613 10:59:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:50.871 * Looking for test storage... 
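Before the next test starts, the trace walks through the standard per-test cleanup. A condensed sketch of that sequence, pieced together from the commands visible above (the real logic is killprocess in autotest_common.sh and nvmftestfini in test/nvmf/common.sh; the pid value is simply the one this run reported):

    # Cleanup between tests, as traced above (sketch only).
    nvmfpid=3568966                       # target pid reported by the trace
    sync
    modprobe -v -r nvme-tcp               # also unloads nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    if kill -0 "$nvmfpid" 2>/dev/null; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi
    # _remove_spdk_ns tears down the cvl_0_0_ns_spdk namespace (its body is not
    # shown in this trace); the leftover initiator-side address is then flushed:
    ip -4 addr flush cvl_0_1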
00:28:50.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.871 10:59:07 -- nvmf/common.sh@7 -- # uname -s 00:28:50.871 10:59:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.871 10:59:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.871 10:59:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.871 10:59:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.871 10:59:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.871 10:59:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.871 10:59:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.871 10:59:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.871 10:59:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.871 10:59:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.871 10:59:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.871 10:59:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.871 10:59:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.871 10:59:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.871 10:59:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.871 10:59:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.871 10:59:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.871 10:59:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.871 10:59:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.871 10:59:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.871 10:59:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.871 10:59:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.871 10:59:07 -- paths/export.sh@5 -- # export PATH 00:28:50.871 10:59:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.871 10:59:07 -- nvmf/common.sh@46 -- # : 0 00:28:50.871 10:59:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:50.871 10:59:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:50.871 10:59:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:50.871 10:59:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.871 10:59:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.871 10:59:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:50.871 10:59:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:50.871 10:59:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:50.871 10:59:07 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:50.871 10:59:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:50.871 10:59:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.871 10:59:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:50.871 10:59:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:50.871 10:59:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:50.871 10:59:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.871 10:59:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.871 10:59:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.872 10:59:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:50.872 10:59:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:50.872 10:59:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:50.872 10:59:07 -- common/autotest_common.sh@10 -- # set +x 00:28:52.771 10:59:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:52.771 10:59:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:52.771 10:59:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:52.771 10:59:09 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:52.771 10:59:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:52.771 10:59:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:52.771 10:59:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:52.771 10:59:09 -- nvmf/common.sh@294 -- # net_devs=() 00:28:52.771 10:59:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:52.771 10:59:09 -- nvmf/common.sh@295 -- # e810=() 00:28:52.771 10:59:09 -- nvmf/common.sh@295 -- # local -ga e810 00:28:52.771 10:59:09 -- nvmf/common.sh@296 -- # x722=() 00:28:52.771 10:59:09 -- nvmf/common.sh@296 -- # local -ga x722 00:28:52.771 10:59:09 -- nvmf/common.sh@297 -- # mlx=() 00:28:52.771 10:59:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:52.771 10:59:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.771 10:59:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:52.771 10:59:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:52.771 10:59:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:52.771 10:59:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:52.771 10:59:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.771 10:59:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:52.771 10:59:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.771 10:59:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:52.771 10:59:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:52.771 10:59:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:52.771 10:59:09 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:52.771 10:59:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.771 10:59:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:52.771 10:59:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.772 10:59:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.772 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.772 10:59:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.772 10:59:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:52.772 10:59:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.772 10:59:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:52.772 10:59:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.772 10:59:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.772 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.772 10:59:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.772 10:59:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:52.772 10:59:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:52.772 10:59:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:52.772 10:59:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:52.772 10:59:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:52.772 10:59:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.772 10:59:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.772 10:59:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.772 10:59:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:52.772 10:59:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.772 10:59:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.772 10:59:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:52.772 10:59:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.772 10:59:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.772 10:59:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:52.772 10:59:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:52.772 10:59:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.772 10:59:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.772 10:59:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.772 10:59:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.772 10:59:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:52.772 10:59:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.772 10:59:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.772 10:59:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.772 10:59:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:52.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:52.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:28:52.772 00:28:52.772 --- 10.0.0.2 ping statistics --- 00:28:52.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.772 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:52.772 10:59:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:28:52.772 00:28:52.772 --- 10.0.0.1 ping statistics --- 00:28:52.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.772 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:52.772 10:59:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.772 10:59:09 -- nvmf/common.sh@410 -- # return 0 00:28:52.772 10:59:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:52.772 10:59:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.772 10:59:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:52.772 10:59:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:52.772 10:59:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.772 10:59:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:52.772 10:59:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:52.772 10:59:09 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:52.772 10:59:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:52.772 10:59:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:52.772 10:59:09 -- common/autotest_common.sh@10 -- # set +x 00:28:52.772 10:59:09 -- nvmf/common.sh@469 -- # nvmfpid=3572595 00:28:52.772 10:59:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:52.772 10:59:09 -- nvmf/common.sh@470 -- # waitforlisten 3572595 00:28:52.772 10:59:09 -- common/autotest_common.sh@819 -- # '[' -z 3572595 ']' 00:28:52.772 10:59:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.772 10:59:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:52.772 10:59:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.772 10:59:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:52.772 10:59:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.031 [2024-07-10 10:59:09.625185] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:53.031 [2024-07-10 10:59:09.625272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.031 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.031 [2024-07-10 10:59:09.687575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.031 [2024-07-10 10:59:09.770554] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:53.031 [2024-07-10 10:59:09.770692] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
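The ping checks above complete the physical-NIC network setup the rest of this test relies on: the first E810 port (cvl_0_0, 10.0.0.2) is moved into a private namespace to act as the target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. A consolidated sketch of that plumbing, taken directly from the commands traced above (the real code is nvmf_tcp_init in test/nvmf/common.sh):

    # Namespace setup as traced above (sketch; the interface names come from the
    # "Found net devices under 0000:0a:00.x" lines earlier in the log).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # target reachable from root ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction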
00:28:53.031 [2024-07-10 10:59:09.770733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.031 [2024-07-10 10:59:09.770747] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.031 [2024-07-10 10:59:09.770797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.965 10:59:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:53.965 10:59:10 -- common/autotest_common.sh@852 -- # return 0 00:28:53.965 10:59:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:53.965 10:59:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:53.965 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:28:53.965 10:59:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.965 10:59:10 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:53.965 10:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.965 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:28:53.965 [2024-07-10 10:59:10.633416] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.965 [2024-07-10 10:59:10.641593] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:53.965 null0 00:28:53.965 [2024-07-10 10:59:10.673559] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.965 10:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.965 10:59:10 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3572748 00:28:53.965 10:59:10 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:53.965 10:59:10 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3572748 /tmp/host.sock 00:28:53.965 10:59:10 -- common/autotest_common.sh@819 -- # '[' -z 3572748 ']' 00:28:53.965 10:59:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:53.965 10:59:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:53.965 10:59:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:53.965 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:53.965 10:59:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:53.965 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:28:53.965 [2024-07-10 10:59:10.731724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:28:53.965 [2024-07-10 10:59:10.731800] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3572748 ] 00:28:53.965 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.224 [2024-07-10 10:59:10.794581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.224 [2024-07-10 10:59:10.884509] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:54.224 [2024-07-10 10:59:10.884680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.224 10:59:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:54.224 10:59:10 -- common/autotest_common.sh@852 -- # return 0 00:28:54.224 10:59:10 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.224 10:59:10 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:54.224 10:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.224 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.224 10:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.224 10:59:10 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:54.224 10:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.224 10:59:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.482 10:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.482 10:59:11 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:54.482 10:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.482 10:59:11 -- common/autotest_common.sh@10 -- # set +x 00:28:55.416 [2024-07-10 10:59:12.091520] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:55.416 [2024-07-10 10:59:12.091557] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:55.416 [2024-07-10 10:59:12.091579] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:55.416 [2024-07-10 10:59:12.217992] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:55.674 [2024-07-10 10:59:12.281747] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:55.674 [2024-07-10 10:59:12.281813] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:55.674 [2024-07-10 10:59:12.281859] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:55.674 [2024-07-10 10:59:12.281884] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:55.674 [2024-07-10 10:59:12.281918] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:55.674 10:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.674 10:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.674 10:59:12 -- common/autotest_common.sh@10 -- # set +x 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.674 [2024-07-10 10:59:12.289716] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24cff00 was disconnected and freed. delete nvme_qpair. 00:28:55.674 10:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.674 10:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.674 10:59:12 -- common/autotest_common.sh@10 -- # set +x 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.674 10:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:55.674 10:59:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.639 10:59:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:56.639 10:59:13 -- common/autotest_common.sh@10 -- # set +x 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:56.639 10:59:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.639 10:59:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.013 10:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.013 10:59:14 -- common/autotest_common.sh@10 -- # set +x 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.013 10:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:58.013 10:59:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.945 10:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.945 10:59:15 -- common/autotest_common.sh@10 -- # set +x 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.945 10:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:58.945 10:59:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.877 10:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.877 10:59:16 -- common/autotest_common.sh@10 -- # set +x 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.877 10:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:59.877 10:59:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:00.810 10:59:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.810 10:59:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.810 10:59:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.810 10:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.810 10:59:17 -- common/autotest_common.sh@10 -- # set +x 00:29:00.810 10:59:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.810 10:59:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.810 10:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:01.068 10:59:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:01.068 10:59:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:01.068 [2024-07-10 10:59:17.722900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:01.068 [2024-07-10 10:59:17.722967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.068 [2024-07-10 10:59:17.722991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.068 [2024-07-10 10:59:17.723009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.068 [2024-07-10 10:59:17.723024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.068 [2024-07-10 10:59:17.723040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.068 [2024-07-10 10:59:17.723055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.068 [2024-07-10 10:59:17.723070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:29:01.068 [2024-07-10 10:59:17.723085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.068 [2024-07-10 10:59:17.723100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.068 [2024-07-10 10:59:17.723115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.068 [2024-07-10 10:59:17.723130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2497150 is same with the state(5) to be set 00:29:01.068 [2024-07-10 10:59:17.732921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2497150 (9): Bad file descriptor 00:29:01.068 [2024-07-10 10:59:17.742968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:02.002 10:59:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.002 10:59:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.002 10:59:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:02.002 10:59:18 -- common/autotest_common.sh@10 -- # set +x 00:29:02.002 10:59:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.002 10:59:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.002 10:59:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.002 [2024-07-10 10:59:18.750453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:03.376 [2024-07-10 10:59:19.774473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:03.376 [2024-07-10 10:59:19.774528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2497150 with addr=10.0.0.2, port=4420 00:29:03.376 [2024-07-10 10:59:19.774552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2497150 is same with the state(5) to be set 00:29:03.376 [2024-07-10 10:59:19.774585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:03.376 [2024-07-10 10:59:19.774603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:03.376 [2024-07-10 10:59:19.774625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:03.376 [2024-07-10 10:59:19.774641] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:29:03.376 [2024-07-10 10:59:19.775059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2497150 (9): Bad file descriptor 00:29:03.376 [2024-07-10 10:59:19.775103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
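Each wait round in this stretch of the trace is the same poll: read the host app's bdev list over /tmp/host.sock and sleep until it matches the expected value (first an empty list while the target-side interface is down, later nvme1n1 once discovery re-attaches). A sketch of that helper pair, reconstructed from the traced commands (the real versions live in test/nvmf/host/discovery_remove_ifc.sh):

    # Reconstruction of the poll helpers traced as discovery_remove_ifc.sh@29-34.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected="$1"
        # Loop until the bdev list equals the expected string (empty while the
        # interface is removed, the new namespace name once it is back).
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }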
00:29:03.376 [2024-07-10 10:59:19.775143] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:03.376 [2024-07-10 10:59:19.775181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.376 [2024-07-10 10:59:19.775205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.376 [2024-07-10 10:59:19.775225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.376 [2024-07-10 10:59:19.775240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.376 [2024-07-10 10:59:19.775256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.376 [2024-07-10 10:59:19.775271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.376 [2024-07-10 10:59:19.775286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.376 [2024-07-10 10:59:19.775301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.376 [2024-07-10 10:59:19.775316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.376 [2024-07-10 10:59:19.775331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.376 [2024-07-10 10:59:19.775346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
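The reconnect attempts above give up quickly because this run started the discovery session with short loss and reconnect timeouts (the RPC call is visible earlier in the trace); once the controller is declared lost it is dropped, and when the interface comes back discovery attaches a fresh controller, which is why the namespace reappears as nvme1n1 rather than nvme0n1. For reference, this is the start command the test used, copied from the traced RPC invocation; the socket path and NQNs are specific to this run:

    # Discovery session parameters used by this test run (copied from the trace).
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach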
00:29:03.376 [2024-07-10 10:59:19.775586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496680 (9): Bad file descriptor 00:29:03.376 [2024-07-10 10:59:19.776603] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:03.376 [2024-07-10 10:59:19.776624] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:03.376 10:59:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.376 10:59:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:03.376 10:59:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.310 10:59:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.310 10:59:20 -- common/autotest_common.sh@10 -- # set +x 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.310 10:59:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.310 10:59:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.310 10:59:20 -- common/autotest_common.sh@10 -- # set +x 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.310 10:59:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:04.310 10:59:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:05.244 [2024-07-10 10:59:21.787331] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:05.244 [2024-07-10 10:59:21.787362] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:05.244 [2024-07-10 10:59:21.787386] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:05.244 10:59:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:05.244 10:59:21 -- common/autotest_common.sh@10 -- # set +x 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:05.244 [2024-07-10 10:59:21.914886] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:05.244 10:59:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:05.244 10:59:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:05.244 [2024-07-10 10:59:21.976801] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:05.244 [2024-07-10 10:59:21.976855] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:05.244 [2024-07-10 10:59:21.976894] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:05.244 [2024-07-10 10:59:21.976920] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:05.244 [2024-07-10 10:59:21.976935] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:05.244 [2024-07-10 10:59:22.026125] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24a4470 was disconnected and freed. delete nvme_qpair. 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:06.177 10:59:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.177 10:59:22 -- common/autotest_common.sh@10 -- # set +x 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:06.177 10:59:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:06.177 10:59:22 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3572748 00:29:06.177 10:59:22 -- common/autotest_common.sh@926 -- # '[' -z 3572748 ']' 00:29:06.177 10:59:22 -- common/autotest_common.sh@930 -- # kill -0 3572748 00:29:06.177 10:59:22 -- common/autotest_common.sh@931 -- # uname 00:29:06.434 10:59:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:06.434 10:59:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3572748 00:29:06.434 10:59:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:06.434 10:59:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:06.434 10:59:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3572748' 00:29:06.434 killing process with pid 3572748 00:29:06.434 10:59:23 -- common/autotest_common.sh@945 -- # kill 3572748 00:29:06.434 10:59:23 -- common/autotest_common.sh@950 -- # wait 3572748 00:29:06.434 10:59:23 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:06.434 10:59:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:06.434 10:59:23 -- nvmf/common.sh@116 -- # sync 00:29:06.434 10:59:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:06.434 10:59:23 -- nvmf/common.sh@119 -- # set +e 00:29:06.434 10:59:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:06.434 10:59:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:06.434 rmmod nvme_tcp 00:29:06.691 rmmod nvme_fabrics 00:29:06.691 rmmod nvme_keyring 00:29:06.691 10:59:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:06.691 10:59:23 -- nvmf/common.sh@123 -- # set -e 00:29:06.691 10:59:23 -- 
nvmf/common.sh@124 -- # return 0 00:29:06.691 10:59:23 -- nvmf/common.sh@477 -- # '[' -n 3572595 ']' 00:29:06.691 10:59:23 -- nvmf/common.sh@478 -- # killprocess 3572595 00:29:06.691 10:59:23 -- common/autotest_common.sh@926 -- # '[' -z 3572595 ']' 00:29:06.691 10:59:23 -- common/autotest_common.sh@930 -- # kill -0 3572595 00:29:06.691 10:59:23 -- common/autotest_common.sh@931 -- # uname 00:29:06.691 10:59:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:06.691 10:59:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3572595 00:29:06.691 10:59:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:06.691 10:59:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:06.691 10:59:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3572595' 00:29:06.691 killing process with pid 3572595 00:29:06.691 10:59:23 -- common/autotest_common.sh@945 -- # kill 3572595 00:29:06.691 10:59:23 -- common/autotest_common.sh@950 -- # wait 3572595 00:29:06.955 10:59:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:06.955 10:59:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:06.955 10:59:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:06.955 10:59:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.955 10:59:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:06.955 10:59:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.955 10:59:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.955 10:59:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.911 10:59:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:08.911 00:29:08.911 real 0m18.190s 00:29:08.911 user 0m25.318s 00:29:08.911 sys 0m2.912s 00:29:08.911 10:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.911 10:59:25 -- common/autotest_common.sh@10 -- # set +x 00:29:08.911 ************************************ 00:29:08.911 END TEST nvmf_discovery_remove_ifc 00:29:08.911 ************************************ 00:29:08.911 10:59:25 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:08.911 10:59:25 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:08.911 10:59:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:08.911 10:59:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:08.911 10:59:25 -- common/autotest_common.sh@10 -- # set +x 00:29:08.911 ************************************ 00:29:08.911 START TEST nvmf_digest 00:29:08.911 ************************************ 00:29:08.911 10:59:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:08.911 * Looking for test storage... 
00:29:08.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.911 10:59:25 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.911 10:59:25 -- nvmf/common.sh@7 -- # uname -s 00:29:08.911 10:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.911 10:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.911 10:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.911 10:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.911 10:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.911 10:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.911 10:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.911 10:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.911 10:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.911 10:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.911 10:59:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.911 10:59:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.911 10:59:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.911 10:59:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.911 10:59:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.911 10:59:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.911 10:59:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.911 10:59:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.911 10:59:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.911 10:59:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.911 10:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.911 10:59:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.911 10:59:25 -- paths/export.sh@5 -- # export PATH 00:29:08.911 10:59:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.911 10:59:25 -- nvmf/common.sh@46 -- # : 0 00:29:08.911 10:59:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:08.911 10:59:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:08.911 10:59:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:08.911 10:59:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.911 10:59:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.911 10:59:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:08.911 10:59:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:08.911 10:59:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:08.911 10:59:25 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:08.911 10:59:25 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:08.911 10:59:25 -- host/digest.sh@16 -- # runtime=2 00:29:08.911 10:59:25 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:08.911 10:59:25 -- host/digest.sh@132 -- # nvmftestinit 00:29:08.911 10:59:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:08.911 10:59:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.911 10:59:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:08.911 10:59:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:08.911 10:59:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:08.911 10:59:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.911 10:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.911 10:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.911 10:59:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:08.911 10:59:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:08.911 10:59:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:08.911 10:59:25 -- common/autotest_common.sh@10 -- # set +x 00:29:10.813 10:59:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:10.813 10:59:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:10.813 10:59:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:10.813 10:59:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:10.813 10:59:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:10.813 10:59:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:10.813 10:59:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:10.813 10:59:27 -- 
nvmf/common.sh@294 -- # net_devs=() 00:29:10.813 10:59:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:10.813 10:59:27 -- nvmf/common.sh@295 -- # e810=() 00:29:10.813 10:59:27 -- nvmf/common.sh@295 -- # local -ga e810 00:29:10.813 10:59:27 -- nvmf/common.sh@296 -- # x722=() 00:29:10.813 10:59:27 -- nvmf/common.sh@296 -- # local -ga x722 00:29:10.813 10:59:27 -- nvmf/common.sh@297 -- # mlx=() 00:29:10.813 10:59:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:10.813 10:59:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.813 10:59:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:10.813 10:59:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:10.813 10:59:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:10.813 10:59:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:10.813 10:59:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.813 10:59:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:10.813 10:59:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.813 10:59:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.813 10:59:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:10.814 10:59:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:10.814 10:59:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:10.814 10:59:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:10.814 10:59:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:10.814 10:59:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.814 10:59:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:10.814 10:59:27 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.814 10:59:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.814 10:59:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.814 10:59:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:10.814 10:59:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.814 10:59:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:10.814 10:59:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.814 10:59:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.814 10:59:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.814 10:59:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:10.814 10:59:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:10.814 10:59:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:10.814 10:59:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:10.814 10:59:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:10.814 10:59:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.814 10:59:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.814 10:59:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.814 10:59:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:10.814 10:59:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.814 10:59:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.814 10:59:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:10.814 10:59:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.814 10:59:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.814 10:59:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:10.814 10:59:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:10.814 10:59:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.814 10:59:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.814 10:59:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.814 10:59:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.814 10:59:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:10.814 10:59:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.072 10:59:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.072 10:59:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.072 10:59:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:11.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:29:11.072 00:29:11.072 --- 10.0.0.2 ping statistics --- 00:29:11.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.072 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:11.072 10:59:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:29:11.072 00:29:11.072 --- 10.0.0.1 ping statistics --- 00:29:11.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.072 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:11.072 10:59:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.072 10:59:27 -- nvmf/common.sh@410 -- # return 0 00:29:11.072 10:59:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:11.072 10:59:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.072 10:59:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:11.072 10:59:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:11.072 10:59:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.072 10:59:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:11.072 10:59:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:11.072 10:59:27 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:11.072 10:59:27 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:11.072 10:59:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:11.072 10:59:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.072 10:59:27 -- common/autotest_common.sh@10 -- # set +x 00:29:11.072 ************************************ 00:29:11.072 START TEST nvmf_digest_clean 00:29:11.072 ************************************ 00:29:11.072 10:59:27 -- common/autotest_common.sh@1104 -- # run_digest 00:29:11.073 10:59:27 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:11.073 10:59:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:11.073 10:59:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:11.073 10:59:27 -- common/autotest_common.sh@10 -- # set +x 00:29:11.073 10:59:27 -- nvmf/common.sh@469 -- # nvmfpid=3576268 00:29:11.073 10:59:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:11.073 10:59:27 -- nvmf/common.sh@470 -- # waitforlisten 3576268 00:29:11.073 10:59:27 -- common/autotest_common.sh@819 -- # '[' -z 3576268 ']' 00:29:11.073 10:59:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.073 10:59:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:11.073 10:59:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.073 10:59:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:11.073 10:59:27 -- common/autotest_common.sh@10 -- # set +x 00:29:11.073 [2024-07-10 10:59:27.775820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
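The traced nvmf_tcp_init above builds the namespace-based TCP loopback that the digest tests reuse: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the default namespace, TCP port 4420 is opened in iptables, and reachability is verified with one ping in each direction. Condensed from the commands visible in the trace (interface names and addresses are the ones used in this particular run, not general defaults):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator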
00:29:11.073 [2024-07-10 10:59:27.775907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.073 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.073 [2024-07-10 10:59:27.839761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.331 [2024-07-10 10:59:27.923608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:11.331 [2024-07-10 10:59:27.923764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.331 [2024-07-10 10:59:27.923797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.331 [2024-07-10 10:59:27.923809] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.331 [2024-07-10 10:59:27.923845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.331 10:59:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:11.331 10:59:27 -- common/autotest_common.sh@852 -- # return 0 00:29:11.331 10:59:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:11.331 10:59:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:11.331 10:59:27 -- common/autotest_common.sh@10 -- # set +x 00:29:11.331 10:59:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.331 10:59:28 -- host/digest.sh@120 -- # common_target_config 00:29:11.331 10:59:28 -- host/digest.sh@43 -- # rpc_cmd 00:29:11.331 10:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:11.331 10:59:28 -- common/autotest_common.sh@10 -- # set +x 00:29:11.331 null0 00:29:11.331 [2024-07-10 10:59:28.108075] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.331 [2024-07-10 10:59:28.132283] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.331 10:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:11.331 10:59:28 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:11.331 10:59:28 -- host/digest.sh@77 -- # local rw bs qd 00:29:11.331 10:59:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:11.331 10:59:28 -- host/digest.sh@80 -- # rw=randread 00:29:11.331 10:59:28 -- host/digest.sh@80 -- # bs=4096 00:29:11.331 10:59:28 -- host/digest.sh@80 -- # qd=128 00:29:11.331 10:59:28 -- host/digest.sh@82 -- # bperfpid=3576409 00:29:11.331 10:59:28 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:11.331 10:59:28 -- host/digest.sh@83 -- # waitforlisten 3576409 /var/tmp/bperf.sock 00:29:11.331 10:59:28 -- common/autotest_common.sh@819 -- # '[' -z 3576409 ']' 00:29:11.331 10:59:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:11.331 10:59:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:11.331 10:59:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
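The nvmf_digest_clean run starting here follows the same pattern for each rw/bs/qd combination: bdevperf is launched with -z and --wait-for-rpc against a private RPC socket, initialization is released, a controller is attached with the data digest enabled (--ddgst), and the workload is driven through bdevperf.py. A sketch condensed from this trace, with the long Jenkins workspace paths shortened to relative SPDK paths:
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests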
00:29:11.331 10:59:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:11.331 10:59:28 -- common/autotest_common.sh@10 -- # set +x 00:29:11.589 [2024-07-10 10:59:28.179376] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:11.589 [2024-07-10 10:59:28.179467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576409 ] 00:29:11.589 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.589 [2024-07-10 10:59:28.245030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.589 [2024-07-10 10:59:28.335288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.589 10:59:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:11.589 10:59:28 -- common/autotest_common.sh@852 -- # return 0 00:29:11.589 10:59:28 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:11.589 10:59:28 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:11.589 10:59:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:12.156 10:59:28 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.156 10:59:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.414 nvme0n1 00:29:12.414 10:59:29 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:12.414 10:59:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.672 Running I/O for 2 seconds... 
00:29:14.571 00:29:14.571 Latency(us) 00:29:14.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.571 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:14.571 nvme0n1 : 2.00 15532.99 60.68 0.00 0.00 8232.55 2864.17 15922.82 00:29:14.571 =================================================================================================================== 00:29:14.571 Total : 15532.99 60.68 0.00 0.00 8232.55 2864.17 15922.82 00:29:14.571 0 00:29:14.571 10:59:31 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:14.571 10:59:31 -- host/digest.sh@92 -- # get_accel_stats 00:29:14.571 10:59:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.571 10:59:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:14.571 10:59:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.571 | select(.opcode=="crc32c") 00:29:14.571 | "\(.module_name) \(.executed)"' 00:29:14.829 10:59:31 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:14.829 10:59:31 -- host/digest.sh@93 -- # exp_module=software 00:29:14.829 10:59:31 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:14.829 10:59:31 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:14.829 10:59:31 -- host/digest.sh@97 -- # killprocess 3576409 00:29:14.829 10:59:31 -- common/autotest_common.sh@926 -- # '[' -z 3576409 ']' 00:29:14.829 10:59:31 -- common/autotest_common.sh@930 -- # kill -0 3576409 00:29:14.829 10:59:31 -- common/autotest_common.sh@931 -- # uname 00:29:14.829 10:59:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.829 10:59:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3576409 00:29:14.829 10:59:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:14.829 10:59:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:14.829 10:59:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3576409' 00:29:14.829 killing process with pid 3576409 00:29:14.829 10:59:31 -- common/autotest_common.sh@945 -- # kill 3576409 00:29:14.829 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.829 00:29:14.829 Latency(us) 00:29:14.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.829 =================================================================================================================== 00:29:14.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.829 10:59:31 -- common/autotest_common.sh@950 -- # wait 3576409 00:29:15.088 10:59:31 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:15.088 10:59:31 -- host/digest.sh@77 -- # local rw bs qd 00:29:15.088 10:59:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:15.088 10:59:31 -- host/digest.sh@80 -- # rw=randread 00:29:15.088 10:59:31 -- host/digest.sh@80 -- # bs=131072 00:29:15.088 10:59:31 -- host/digest.sh@80 -- # qd=16 00:29:15.088 10:59:31 -- host/digest.sh@82 -- # bperfpid=3576832 00:29:15.088 10:59:31 -- host/digest.sh@83 -- # waitforlisten 3576832 /var/tmp/bperf.sock 00:29:15.088 10:59:31 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:15.088 10:59:31 -- common/autotest_common.sh@819 -- # '[' -z 3576832 ']' 00:29:15.088 10:59:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
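After each two-second run the test decides pass/fail from the accel framework statistics rather than from the I/O numbers: digests are expected to be computed by the software crc32c module, so the executed count for that opcode must be non-zero. The check, as issued over the bperf RPC socket in the trace above:
scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected for digest_clean: module_name "software" with executed > 0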
00:29:15.088 10:59:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:15.088 10:59:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.088 10:59:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:15.088 10:59:31 -- common/autotest_common.sh@10 -- # set +x 00:29:15.088 [2024-07-10 10:59:31.828434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:15.088 [2024-07-10 10:59:31.828516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3576832 ] 00:29:15.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.088 Zero copy mechanism will not be used. 00:29:15.088 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.088 [2024-07-10 10:59:31.890478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.346 [2024-07-10 10:59:31.980269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.346 10:59:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.346 10:59:32 -- common/autotest_common.sh@852 -- # return 0 00:29:15.346 10:59:32 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:15.346 10:59:32 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:15.346 10:59:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:15.604 10:59:32 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.604 10:59:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.169 nvme0n1 00:29:16.169 10:59:32 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:16.169 10:59:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.169 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.169 Zero copy mechanism will not be used. 00:29:16.169 Running I/O for 2 seconds... 
00:29:18.066 00:29:18.066 Latency(us) 00:29:18.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.066 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:18.066 nvme0n1 : 2.00 2539.21 317.40 0.00 0.00 6297.33 5072.97 9951.76 00:29:18.066 =================================================================================================================== 00:29:18.067 Total : 2539.21 317.40 0.00 0.00 6297.33 5072.97 9951.76 00:29:18.067 0 00:29:18.067 10:59:34 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:18.067 10:59:34 -- host/digest.sh@92 -- # get_accel_stats 00:29:18.067 10:59:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:18.067 10:59:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:18.067 10:59:34 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:18.067 | select(.opcode=="crc32c") 00:29:18.067 | "\(.module_name) \(.executed)"' 00:29:18.324 10:59:35 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:18.324 10:59:35 -- host/digest.sh@93 -- # exp_module=software 00:29:18.324 10:59:35 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:18.324 10:59:35 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:18.324 10:59:35 -- host/digest.sh@97 -- # killprocess 3576832 00:29:18.324 10:59:35 -- common/autotest_common.sh@926 -- # '[' -z 3576832 ']' 00:29:18.324 10:59:35 -- common/autotest_common.sh@930 -- # kill -0 3576832 00:29:18.324 10:59:35 -- common/autotest_common.sh@931 -- # uname 00:29:18.324 10:59:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.324 10:59:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3576832 00:29:18.581 10:59:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:18.581 10:59:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:18.581 10:59:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3576832' 00:29:18.581 killing process with pid 3576832 00:29:18.581 10:59:35 -- common/autotest_common.sh@945 -- # kill 3576832 00:29:18.581 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.581 00:29:18.581 Latency(us) 00:29:18.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.582 =================================================================================================================== 00:29:18.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.582 10:59:35 -- common/autotest_common.sh@950 -- # wait 3576832 00:29:18.582 10:59:35 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:18.582 10:59:35 -- host/digest.sh@77 -- # local rw bs qd 00:29:18.582 10:59:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:18.582 10:59:35 -- host/digest.sh@80 -- # rw=randwrite 00:29:18.582 10:59:35 -- host/digest.sh@80 -- # bs=4096 00:29:18.582 10:59:35 -- host/digest.sh@80 -- # qd=128 00:29:18.582 10:59:35 -- host/digest.sh@82 -- # bperfpid=3577254 00:29:18.582 10:59:35 -- host/digest.sh@83 -- # waitforlisten 3577254 /var/tmp/bperf.sock 00:29:18.582 10:59:35 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:18.582 10:59:35 -- common/autotest_common.sh@819 -- # '[' -z 3577254 ']' 00:29:18.582 10:59:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:18.582 10:59:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:18.582 10:59:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.582 10:59:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:18.582 10:59:35 -- common/autotest_common.sh@10 -- # set +x 00:29:18.839 [2024-07-10 10:59:35.426283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:18.839 [2024-07-10 10:59:35.426361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577254 ] 00:29:18.839 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.840 [2024-07-10 10:59:35.483609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.840 [2024-07-10 10:59:35.569059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.840 10:59:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:18.840 10:59:35 -- common/autotest_common.sh@852 -- # return 0 00:29:18.840 10:59:35 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:18.840 10:59:35 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:18.840 10:59:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.405 10:59:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.405 10:59:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.663 nvme0n1 00:29:19.663 10:59:36 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:19.663 10:59:36 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.663 Running I/O for 2 seconds... 
00:29:22.188 00:29:22.188 Latency(us) 00:29:22.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.188 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:22.188 nvme0n1 : 2.00 20648.71 80.66 0.00 0.00 6191.01 3082.62 13204.29 00:29:22.188 =================================================================================================================== 00:29:22.188 Total : 20648.71 80.66 0.00 0.00 6191.01 3082.62 13204.29 00:29:22.188 0 00:29:22.188 10:59:38 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:22.188 10:59:38 -- host/digest.sh@92 -- # get_accel_stats 00:29:22.188 10:59:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:22.188 10:59:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:22.188 | select(.opcode=="crc32c") 00:29:22.188 | "\(.module_name) \(.executed)"' 00:29:22.188 10:59:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:22.188 10:59:38 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:22.188 10:59:38 -- host/digest.sh@93 -- # exp_module=software 00:29:22.188 10:59:38 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:22.188 10:59:38 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:22.188 10:59:38 -- host/digest.sh@97 -- # killprocess 3577254 00:29:22.188 10:59:38 -- common/autotest_common.sh@926 -- # '[' -z 3577254 ']' 00:29:22.188 10:59:38 -- common/autotest_common.sh@930 -- # kill -0 3577254 00:29:22.188 10:59:38 -- common/autotest_common.sh@931 -- # uname 00:29:22.188 10:59:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.188 10:59:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3577254 00:29:22.188 10:59:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:22.188 10:59:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:22.188 10:59:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3577254' 00:29:22.188 killing process with pid 3577254 00:29:22.188 10:59:38 -- common/autotest_common.sh@945 -- # kill 3577254 00:29:22.188 Received shutdown signal, test time was about 2.000000 seconds 00:29:22.188 00:29:22.188 Latency(us) 00:29:22.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.188 =================================================================================================================== 00:29:22.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.188 10:59:38 -- common/autotest_common.sh@950 -- # wait 3577254 00:29:22.188 10:59:38 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:22.188 10:59:38 -- host/digest.sh@77 -- # local rw bs qd 00:29:22.188 10:59:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:22.188 10:59:38 -- host/digest.sh@80 -- # rw=randwrite 00:29:22.188 10:59:38 -- host/digest.sh@80 -- # bs=131072 00:29:22.189 10:59:38 -- host/digest.sh@80 -- # qd=16 00:29:22.189 10:59:38 -- host/digest.sh@82 -- # bperfpid=3577670 00:29:22.189 10:59:38 -- host/digest.sh@83 -- # waitforlisten 3577670 /var/tmp/bperf.sock 00:29:22.189 10:59:38 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:22.189 10:59:38 -- common/autotest_common.sh@819 -- # '[' -z 3577670 ']' 00:29:22.189 10:59:38 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:22.189 10:59:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:22.189 10:59:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:22.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:22.189 10:59:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:22.189 10:59:38 -- common/autotest_common.sh@10 -- # set +x 00:29:22.447 [2024-07-10 10:59:39.031294] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:22.447 [2024-07-10 10:59:39.031381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577670 ] 00:29:22.447 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.447 Zero copy mechanism will not be used. 00:29:22.447 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.447 [2024-07-10 10:59:39.093988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.447 [2024-07-10 10:59:39.178014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.447 10:59:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:22.447 10:59:39 -- common/autotest_common.sh@852 -- # return 0 00:29:22.447 10:59:39 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:22.447 10:59:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:22.447 10:59:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:23.068 10:59:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.068 10:59:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.068 nvme0n1 00:29:23.068 10:59:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:23.068 10:59:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.326 Zero copy mechanism will not be used. 00:29:23.326 Running I/O for 2 seconds... 
00:29:25.220 00:29:25.220 Latency(us) 00:29:25.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.220 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:25.220 nvme0n1 : 2.01 2524.34 315.54 0.00 0.00 6324.69 3665.16 9709.04 00:29:25.220 =================================================================================================================== 00:29:25.220 Total : 2524.34 315.54 0.00 0.00 6324.69 3665.16 9709.04 00:29:25.220 0 00:29:25.220 10:59:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:25.220 10:59:41 -- host/digest.sh@92 -- # get_accel_stats 00:29:25.220 10:59:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:25.220 10:59:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:25.220 | select(.opcode=="crc32c") 00:29:25.220 | "\(.module_name) \(.executed)"' 00:29:25.220 10:59:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:25.477 10:59:42 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:25.477 10:59:42 -- host/digest.sh@93 -- # exp_module=software 00:29:25.477 10:59:42 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:25.477 10:59:42 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:25.477 10:59:42 -- host/digest.sh@97 -- # killprocess 3577670 00:29:25.477 10:59:42 -- common/autotest_common.sh@926 -- # '[' -z 3577670 ']' 00:29:25.477 10:59:42 -- common/autotest_common.sh@930 -- # kill -0 3577670 00:29:25.477 10:59:42 -- common/autotest_common.sh@931 -- # uname 00:29:25.477 10:59:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.477 10:59:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3577670 00:29:25.477 10:59:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:25.477 10:59:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:25.477 10:59:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3577670' 00:29:25.477 killing process with pid 3577670 00:29:25.477 10:59:42 -- common/autotest_common.sh@945 -- # kill 3577670 00:29:25.477 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.477 00:29:25.477 Latency(us) 00:29:25.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.477 =================================================================================================================== 00:29:25.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.477 10:59:42 -- common/autotest_common.sh@950 -- # wait 3577670 00:29:25.735 10:59:42 -- host/digest.sh@126 -- # killprocess 3576268 00:29:25.735 10:59:42 -- common/autotest_common.sh@926 -- # '[' -z 3576268 ']' 00:29:25.735 10:59:42 -- common/autotest_common.sh@930 -- # kill -0 3576268 00:29:25.735 10:59:42 -- common/autotest_common.sh@931 -- # uname 00:29:25.735 10:59:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.735 10:59:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3576268 00:29:25.735 10:59:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:25.735 10:59:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:25.735 10:59:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3576268' 00:29:25.735 killing process with pid 3576268 00:29:25.735 10:59:42 -- common/autotest_common.sh@945 -- # kill 3576268 00:29:25.735 10:59:42 -- common/autotest_common.sh@950 -- # wait 3576268 
00:29:25.992 00:29:25.992 real 0m15.013s 00:29:25.992 user 0m29.025s 00:29:25.992 sys 0m4.109s 00:29:25.992 10:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.992 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:29:25.992 ************************************ 00:29:25.992 END TEST nvmf_digest_clean 00:29:25.992 ************************************ 00:29:25.992 10:59:42 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:25.992 10:59:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:25.992 10:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.992 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:29:25.992 ************************************ 00:29:25.992 START TEST nvmf_digest_error 00:29:25.992 ************************************ 00:29:25.992 10:59:42 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:25.992 10:59:42 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:25.992 10:59:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:25.992 10:59:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:25.992 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:29:25.992 10:59:42 -- nvmf/common.sh@469 -- # nvmfpid=3578166 00:29:25.993 10:59:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:25.993 10:59:42 -- nvmf/common.sh@470 -- # waitforlisten 3578166 00:29:25.993 10:59:42 -- common/autotest_common.sh@819 -- # '[' -z 3578166 ']' 00:29:25.993 10:59:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.993 10:59:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:25.993 10:59:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.993 10:59:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:25.993 10:59:42 -- common/autotest_common.sh@10 -- # set +x 00:29:26.250 [2024-07-10 10:59:42.818504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:26.250 [2024-07-10 10:59:42.818594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.250 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.250 [2024-07-10 10:59:42.888061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.250 [2024-07-10 10:59:42.976979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:26.251 [2024-07-10 10:59:42.977137] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.251 [2024-07-10 10:59:42.977154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.251 [2024-07-10 10:59:42.977167] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:26.251 [2024-07-10 10:59:42.977195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.251 10:59:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:26.251 10:59:43 -- common/autotest_common.sh@852 -- # return 0 00:29:26.251 10:59:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:26.251 10:59:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:26.251 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:26.251 10:59:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.251 10:59:43 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:26.251 10:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.251 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:26.251 [2024-07-10 10:59:43.057844] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:26.251 10:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.251 10:59:43 -- host/digest.sh@104 -- # common_target_config 00:29:26.251 10:59:43 -- host/digest.sh@43 -- # rpc_cmd 00:29:26.251 10:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.251 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:26.508 null0 00:29:26.508 [2024-07-10 10:59:43.179103] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.508 [2024-07-10 10:59:43.203250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.508 10:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.508 10:59:43 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:26.508 10:59:43 -- host/digest.sh@54 -- # local rw bs qd 00:29:26.508 10:59:43 -- host/digest.sh@56 -- # rw=randread 00:29:26.508 10:59:43 -- host/digest.sh@56 -- # bs=4096 00:29:26.508 10:59:43 -- host/digest.sh@56 -- # qd=128 00:29:26.508 10:59:43 -- host/digest.sh@58 -- # bperfpid=3578261 00:29:26.508 10:59:43 -- host/digest.sh@60 -- # waitforlisten 3578261 /var/tmp/bperf.sock 00:29:26.508 10:59:43 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:26.508 10:59:43 -- common/autotest_common.sh@819 -- # '[' -z 3578261 ']' 00:29:26.508 10:59:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.508 10:59:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:26.508 10:59:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:26.508 10:59:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:26.508 10:59:43 -- common/autotest_common.sh@10 -- # set +x 00:29:26.508 [2024-07-10 10:59:43.248535] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:26.509 [2024-07-10 10:59:43.248600] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578261 ] 00:29:26.509 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.509 [2024-07-10 10:59:43.310399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.767 [2024-07-10 10:59:43.401228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.700 10:59:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:27.700 10:59:44 -- common/autotest_common.sh@852 -- # return 0 00:29:27.700 10:59:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:27.700 10:59:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:27.700 10:59:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:27.700 10:59:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.700 10:59:44 -- common/autotest_common.sh@10 -- # set +x 00:29:27.700 10:59:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.700 10:59:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.700 10:59:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:28.267 nvme0n1 00:29:28.267 10:59:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:28.267 10:59:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.267 10:59:44 -- common/autotest_common.sh@10 -- # set +x 00:29:28.267 10:59:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.267 10:59:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:28.267 10:59:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.267 Running I/O for 2 seconds... 
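The digest_error case beginning here reuses the same bperf flow but provokes digest failures on purpose: crc32c is reassigned to the accel error-injection module in the nvmf target application, bdevperf on the host side attaches with --ddgst and NVMe error statistics enabled, and 256 crc32c operations are then corrupted so the affected reads complete with data digest errors (the COMMAND TRANSIENT TRANSPORT ERROR completions below). Condensed from the trace, using the test's own helpers (rpc_cmd addresses the nvmf target started above; bperf_rpc and bperf_py address /var/tmp/bperf.sock):
rpc_cmd accel_assign_opc -o crc32c -m error                      # route crc32c to the error module
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable            # start with injection disabled
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256     # corrupt 256 crc32c operations
bperf_py perform_tests                                           # reads now fail the data digest check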
00:29:28.525 [2024-07-10 10:59:45.109047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.109097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.109118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.126202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.126233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.126265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.143126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.143157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.143188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.158314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.158358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.158375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.169571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.169598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.169629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.185395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.185423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.185462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.208009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.208040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.208057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.224274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.224304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.224336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.240079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.240110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.240127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.255041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.255073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.255104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.266586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.266615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.266646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.282022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.282056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.282075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.298293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.298321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.298352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.313296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.313330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.525 [2024-07-10 10:59:45.313348] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.525 [2024-07-10 10:59:45.329291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.525 [2024-07-10 10:59:45.329334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.526 [2024-07-10 10:59:45.329349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.526 [2024-07-10 10:59:45.345297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.526 [2024-07-10 10:59:45.345324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.526 [2024-07-10 10:59:45.345354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.363259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.363287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.363323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.379050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.379078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.379108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.395670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.395700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.395733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.412331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.412359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.412390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.428437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.428464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.428494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.444871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.444898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.444929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.461260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.461287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.461317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.477884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.477919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.477937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.494634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.494662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.494693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.511455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.511488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.511519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.528059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.528087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.528117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.544479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.544506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.785 [2024-07-10 10:59:45.544536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.560641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.560669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.560702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.576881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.576915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.576934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.785 [2024-07-10 10:59:45.593009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:28.785 [2024-07-10 10:59:45.593042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.785 [2024-07-10 10:59:45.593061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.610065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.610094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.610126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.626873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.626906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.626924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.643468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.643496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.643528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.659983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.660011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:841 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.660041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.676272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.676299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.676329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.692767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.692794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.692810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.709072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.709100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.709131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.725135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.725163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.725194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.741195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.741222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.741253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.757565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.757595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.757627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.773692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.773735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:14540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.773751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.790015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.790043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.790079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.805604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.805633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.805665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.822031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.822059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.822088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.838724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.838751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.838782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.044 [2024-07-10 10:59:45.854318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.044 [2024-07-10 10:59:45.854347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.044 [2024-07-10 10:59:45.854363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.871044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.871072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.871102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.887013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.887041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.887072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.903551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.903579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.903595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.920468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.920510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.920526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.937601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.937635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.937668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.953853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.953887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.953906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.969316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.969345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.969361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:45.985240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:45.985270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:45.985301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:46.001662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 
[2024-07-10 10:59:46.001706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:46.001722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:46.018392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:46.018421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:46.018462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:46.034701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.302 [2024-07-10 10:59:46.034730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.302 [2024-07-10 10:59:46.034761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.302 [2024-07-10 10:59:46.051482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.303 [2024-07-10 10:59:46.051525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.303 [2024-07-10 10:59:46.051541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.303 [2024-07-10 10:59:46.067601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.303 [2024-07-10 10:59:46.067631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.303 [2024-07-10 10:59:46.067661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.303 [2024-07-10 10:59:46.083861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.303 [2024-07-10 10:59:46.083889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.303 [2024-07-10 10:59:46.083906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.303 [2024-07-10 10:59:46.100472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.303 [2024-07-10 10:59:46.100501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.303 [2024-07-10 10:59:46.100532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.303 [2024-07-10 10:59:46.116724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcfae80) 00:29:29.303 [2024-07-10 10:59:46.116752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.303 [2024-07-10 10:59:46.116782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.133193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.133223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.133253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.149640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.149669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.167247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.167277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.167307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.182354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.182383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.182437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.193436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.193473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.193503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.210166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.210199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.210232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.231995] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.232026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.232044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.248555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.248586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.248602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.264604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.264635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.264652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.280967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.280998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.281014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.296033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.296064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.296081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.306738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.306765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.306794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.322993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.323026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.323044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:29.561 [2024-07-10 10:59:46.339562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.561 [2024-07-10 10:59:46.339589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.561 [2024-07-10 10:59:46.339620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.561 [2024-07-10 10:59:46.355634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.562 [2024-07-10 10:59:46.355662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.562 [2024-07-10 10:59:46.355694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.562 [2024-07-10 10:59:46.372092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.562 [2024-07-10 10:59:46.372125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.562 [2024-07-10 10:59:46.372143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.395410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.395447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.395464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.411753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.411783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.411799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.428750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.428780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.428797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.445477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.445521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.445539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.462011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.462055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.462071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.473038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.473066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.473099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.488221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.488255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.488284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.504135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.504169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.504187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.521175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.521209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.521226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.538240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.538273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.538291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.820 [2024-07-10 10:59:46.554814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.820 [2024-07-10 10:59:46.554846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.820 [2024-07-10 10:59:46.554865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.821 [2024-07-10 10:59:46.571405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.821 [2024-07-10 10:59:46.571446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.821 [2024-07-10 10:59:46.571465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.821 [2024-07-10 10:59:46.588371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.821 [2024-07-10 10:59:46.588399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.821 [2024-07-10 10:59:46.588440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.821 [2024-07-10 10:59:46.604204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.821 [2024-07-10 10:59:46.604237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.821 [2024-07-10 10:59:46.604255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.821 [2024-07-10 10:59:46.621248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.821 [2024-07-10 10:59:46.621277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.821 [2024-07-10 10:59:46.621308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.821 [2024-07-10 10:59:46.636715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:29.821 [2024-07-10 10:59:46.636768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.821 [2024-07-10 10:59:46.636787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.653679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.653706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.653736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.670451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.670479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.670510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.686471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.686504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.686522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.703552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.703595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.703611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.719529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.719557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.719589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.736265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.736300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.736318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.753185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.753227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.753242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.768479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.768522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.768537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.784417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.784455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 
[2024-07-10 10:59:46.784472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.800559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.800588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.800619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.817393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.817433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.817453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.834078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.834111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.834130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.850286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.850319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.850337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.866645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.866673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.866703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.882871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.882904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.882922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.079 [2024-07-10 10:59:46.899011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.079 [2024-07-10 10:59:46.899044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6787 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.079 [2024-07-10 10:59:46.899064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.337 [2024-07-10 10:59:46.916118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.337 [2024-07-10 10:59:46.916151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.337 [2024-07-10 10:59:46.916176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.337 [2024-07-10 10:59:46.932327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.337 [2024-07-10 10:59:46.932361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.337 [2024-07-10 10:59:46.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.337 [2024-07-10 10:59:46.948540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.337 [2024-07-10 10:59:46.948571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:46.948587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:46.964958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:46.964988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:46.965005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:46.981906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:46.981935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:46.981952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:46.997489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:46.997519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:46.997535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:47.008511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:47.008537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:47.008566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:47.031430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:47.031476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:47.031493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:47.048195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:47.048224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:47.048256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:47.059033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:47.059070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:47.059089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:47.074746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:47.074778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:47.074796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 [2024-07-10 10:59:47.090676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfae80) 00:29:30.338 [2024-07-10 10:59:47.090714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.338 [2024-07-10 10:59:47.090729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.338 00:29:30.338 Latency(us) 00:29:30.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.338 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:30.338 nvme0n1 : 2.04 15356.51 59.99 0.00 0.00 8165.20 2609.30 48545.19 00:29:30.338 =================================================================================================================== 00:29:30.338 Total : 15356.51 59.99 0.00 0.00 8165.20 2609.30 48545.19 00:29:30.338 0 00:29:30.338 10:59:47 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:30.338 10:59:47 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:30.338 10:59:47 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:30.338 10:59:47 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:30.338 | .driver_specific 00:29:30.338 | .nvme_error 00:29:30.338 | .status_code 00:29:30.338 | .command_transient_transport_error' 00:29:30.596 10:59:47 -- host/digest.sh@71 -- # (( 123 > 0 )) 00:29:30.596 10:59:47 -- host/digest.sh@73 -- # killprocess 3578261 00:29:30.596 10:59:47 -- common/autotest_common.sh@926 -- # '[' -z 3578261 ']' 00:29:30.596 10:59:47 -- common/autotest_common.sh@930 -- # kill -0 3578261 00:29:30.596 10:59:47 -- common/autotest_common.sh@931 -- # uname 00:29:30.596 10:59:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:30.596 10:59:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3578261 00:29:30.596 10:59:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:30.596 10:59:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:30.596 10:59:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3578261' 00:29:30.596 killing process with pid 3578261 00:29:30.596 10:59:47 -- common/autotest_common.sh@945 -- # kill 3578261 00:29:30.596 Received shutdown signal, test time was about 2.000000 seconds 00:29:30.596 00:29:30.596 Latency(us) 00:29:30.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.596 =================================================================================================================== 00:29:30.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:30.596 10:59:47 -- common/autotest_common.sh@950 -- # wait 3578261 00:29:30.864 10:59:47 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:29:30.864 10:59:47 -- host/digest.sh@54 -- # local rw bs qd 00:29:30.864 10:59:47 -- host/digest.sh@56 -- # rw=randread 00:29:30.864 10:59:47 -- host/digest.sh@56 -- # bs=131072 00:29:30.864 10:59:47 -- host/digest.sh@56 -- # qd=16 00:29:30.864 10:59:47 -- host/digest.sh@58 -- # bperfpid=3578830 00:29:30.864 10:59:47 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:30.864 10:59:47 -- host/digest.sh@60 -- # waitforlisten 3578830 /var/tmp/bperf.sock 00:29:30.864 10:59:47 -- common/autotest_common.sh@819 -- # '[' -z 3578830 ']' 00:29:30.864 10:59:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.864 10:59:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:30.864 10:59:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:30.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:30.864 10:59:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:30.864 10:59:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.864 [2024-07-10 10:59:47.668710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:30.864 [2024-07-10 10:59:47.668791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578830 ] 00:29:30.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:30.864 Zero copy mechanism will not be used. 
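The get_transient_errcount step traced above is the pass/fail check for the run that just finished: it reads the NVMe error counters that --nvme-error-stat exposes through bdev_get_iostat on the bperf RPC socket and extracts the transient transport error count with jq (the 123 compared against 0 above is the value that query returned). A minimal standalone sketch of that check, assuming the same socket path and bdev name that appear in the trace:

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (paths and names as in the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# bdev_get_iostat returns JSON; with --nvme-error-stat enabled the per-status-code
# counters live under .driver_specific.nvme_error.status_code for each bdev.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

# The run only counts as a pass if the injected crc32c corruption actually
# surfaced as transient transport (data digest) errors on the bdev.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"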
00:29:31.123 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.123 [2024-07-10 10:59:47.730575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.123 [2024-07-10 10:59:47.817174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.057 10:59:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:32.057 10:59:48 -- common/autotest_common.sh@852 -- # return 0 00:29:32.057 10:59:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:32.057 10:59:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:32.057 10:59:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:32.057 10:59:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.057 10:59:48 -- common/autotest_common.sh@10 -- # set +x 00:29:32.057 10:59:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.057 10:59:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.057 10:59:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.315 nvme0n1 00:29:32.315 10:59:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:32.315 10:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.315 10:59:49 -- common/autotest_common.sh@10 -- # set +x 00:29:32.573 10:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.573 10:59:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:32.573 10:59:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:32.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:32.573 Zero copy mechanism will not be used. 00:29:32.573 Running I/O for 2 seconds... 
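Before the digest-error records below begin, the trace has finished setting up the second run (randread, 131072-byte I/O, queue depth 16): bdevperf was started with -z and waited on, NVMe error statistics were enabled, crc32c error injection was re-armed, and the controller was attached with data digest (--ddgst) so that corrupted digests surface as the errors being counted. A condensed sketch of that traced sequence, using the same binaries, sockets and arguments that appear in the log; note that the accel_error_inject_error calls are issued with rpc_cmd in the trace, not over the bperf socket, so they are shown here against the default RPC socket:

#!/usr/bin/env bash
# Condensed sketch of the setup traced above for the 131072-byte / qd=16 run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # initiator-side (bdevperf) RPCs
TGT_RPC="$SPDK/scripts/rpc.py"                            # rpc_cmd in the trace: default socket

# Start bdevperf on core mask 0x2 in wait-for-RPC mode (-z): 2 s of randread,
# 131072-byte I/O at queue depth 16; the harness then waits on /var/tmp/bperf.sock.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Enable per-status-code NVMe error counters and set an unlimited bdev retry count.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Reset crc32c error injection, attach the NVMe-oF/TCP controller with data
# digest enabled, then arm corruption of crc32c results (arguments as traced).
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the workload defined on the bdevperf command line.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests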
00:29:32.573 [2024-07-10 10:59:49.250311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.250365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.250386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.260443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.260492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.260510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.270155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.270189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.270207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.280169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.280201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.280220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.290118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.290151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.290169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.300059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.300093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.300111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.309997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.310029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.310047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.319940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.319972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.319989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.330073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.330104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.330122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.339978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.573 [2024-07-10 10:59:49.340010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.573 [2024-07-10 10:59:49.340028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.573 [2024-07-10 10:59:49.350170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.574 [2024-07-10 10:59:49.350207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-07-10 10:59:49.350227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.574 [2024-07-10 10:59:49.360055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.574 [2024-07-10 10:59:49.360087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-07-10 10:59:49.360105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.574 [2024-07-10 10:59:49.370064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.574 [2024-07-10 10:59:49.370095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-07-10 10:59:49.370113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.574 [2024-07-10 10:59:49.379982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.574 [2024-07-10 10:59:49.380013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-07-10 10:59:49.380031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.574 [2024-07-10 10:59:49.390002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.574 [2024-07-10 10:59:49.390034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.574 [2024-07-10 10:59:49.390052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.832 [2024-07-10 10:59:49.400010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.832 [2024-07-10 10:59:49.400053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.832 [2024-07-10 10:59:49.400070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.832 [2024-07-10 10:59:49.410029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.832 [2024-07-10 10:59:49.410060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.832 [2024-07-10 10:59:49.410078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.832 [2024-07-10 10:59:49.419894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.832 [2024-07-10 10:59:49.419926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.832 [2024-07-10 10:59:49.419944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.832 [2024-07-10 10:59:49.429881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.832 [2024-07-10 10:59:49.429913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.832 [2024-07-10 10:59:49.429930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.832 [2024-07-10 10:59:49.440048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.832 [2024-07-10 10:59:49.440080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.832 [2024-07-10 10:59:49.440098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.832 [2024-07-10 10:59:49.449970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.832 [2024-07-10 10:59:49.450000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.832 [2024-07-10 10:59:49.450018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.459825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.459857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.459875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.469676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.469703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.469735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.479694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.479739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.479757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.489520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.489547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.489577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.499256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.499287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.499305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.509135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.509167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.509185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.519683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.519728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.519756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.529691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.529721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.529752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.539908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.539942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.539960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.549944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.549977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.549994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.560122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.560160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.560178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.570067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.570100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.570118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.580034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.580066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.580084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.589864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.589896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.589913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.599921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.599953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.599971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.610034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.610156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.610177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.620002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.620037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.620054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.629875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.629907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.629925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.640347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.640388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.640407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.833 [2024-07-10 10:59:49.650212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:32.833 [2024-07-10 10:59:49.650245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.833 [2024-07-10 10:59:49.650262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.660195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 
00:29:33.091 [2024-07-10 10:59:49.660228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.660246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.670052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.670085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.670103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.679982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.680014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.680032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.689944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.689976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.689994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.699884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.699915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.699932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.709943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.709975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.709993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.719796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.719822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.719853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.730006] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.730038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.730056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.739946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.739977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.739995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.749976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.750008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.750026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.759917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.759948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.759965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.770103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.770135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.770153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.780025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.780063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.780082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.789863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.789896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.789914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.799984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.800016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.800034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.809831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.809862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.809880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.819830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.819862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.819880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.829945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.829976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.829994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.839818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.839849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.839867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.849885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.849917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.091 [2024-07-10 10:59:49.849935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.091 [2024-07-10 10:59:49.859936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.091 [2024-07-10 10:59:49.859970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.092 [2024-07-10 10:59:49.859988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.092 [2024-07-10 10:59:49.869887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.092 [2024-07-10 10:59:49.869920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.092 [2024-07-10 10:59:49.869939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.092 [2024-07-10 10:59:49.879885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.092 [2024-07-10 10:59:49.879918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.092 [2024-07-10 10:59:49.879936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.092 [2024-07-10 10:59:49.889804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.092 [2024-07-10 10:59:49.889836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.092 [2024-07-10 10:59:49.889855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.092 [2024-07-10 10:59:49.899725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.092 [2024-07-10 10:59:49.899752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.092 [2024-07-10 10:59:49.899782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.092 [2024-07-10 10:59:49.909593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.092 [2024-07-10 10:59:49.909619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.092 [2024-07-10 10:59:49.909649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.349 [2024-07-10 10:59:49.919280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.349 [2024-07-10 10:59:49.919308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.349 [2024-07-10 10:59:49.919324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.349 [2024-07-10 10:59:49.929187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.349 [2024-07-10 10:59:49.929218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.349 [2024-07-10 10:59:49.929236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.349 [2024-07-10 10:59:49.938992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.939024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.939042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:49.948824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.948856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.948880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:49.958821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.958853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.958871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:49.968859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.968890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.968908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:49.978746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.978791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.978809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:49.988659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.988687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.988702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:49.998716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:49.998762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:49.998780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.008590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.008647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.008670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.018687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.018717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.018749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.029088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.029121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.029140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.038973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.039013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.039034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.048890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.048923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.048942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.058679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.058721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.058736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.068317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.068348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.068366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.078156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.078188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.078206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.087883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.087914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.087933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.097729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.097765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.097783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.107630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.107657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.107688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.117670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.117697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.117734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.127496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.127523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.127555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.137152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.137183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.137200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.146987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.147018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.147035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.156928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.156959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.156977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.350 [2024-07-10 10:59:50.166890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.350 [2024-07-10 10:59:50.166921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.350 [2024-07-10 10:59:50.166939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.176973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.177020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.177036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.186893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.186925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.186943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.197321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.197354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.197372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.207026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 
00:29:33.609 [2024-07-10 10:59:50.207064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.207083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.216873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.216904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.216922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.226855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.226886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.226904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.237002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.237034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.237052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.246874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.246905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.246923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.256668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.256695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.256726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.266494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.266522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.266537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.276469] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.276497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.276527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.287332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.287365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.287384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.297004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.297036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.297053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.306954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.306986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.307004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.316834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.316866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.316884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.326866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.326898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.326915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.337099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.337132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.337150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.346969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.347001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.347019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.356823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.356854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.356872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.366846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.366877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.366895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.376773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.376816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.376840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.609 [2024-07-10 10:59:50.386845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.609 [2024-07-10 10:59:50.386876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.609 [2024-07-10 10:59:50.386894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.610 [2024-07-10 10:59:50.396907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.610 [2024-07-10 10:59:50.396938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.610 [2024-07-10 10:59:50.396956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.610 [2024-07-10 10:59:50.406866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.610 [2024-07-10 10:59:50.406897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.610 [2024-07-10 10:59:50.406915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.610 [2024-07-10 10:59:50.416864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.610 [2024-07-10 10:59:50.416895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.610 [2024-07-10 10:59:50.416913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.610 [2024-07-10 10:59:50.426734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.610 [2024-07-10 10:59:50.426776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.610 [2024-07-10 10:59:50.426790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.868 [2024-07-10 10:59:50.436512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.868 [2024-07-10 10:59:50.436540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.868 [2024-07-10 10:59:50.436556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.868 [2024-07-10 10:59:50.446318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.868 [2024-07-10 10:59:50.446350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.868 [2024-07-10 10:59:50.446367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.868 [2024-07-10 10:59:50.455976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.868 [2024-07-10 10:59:50.456007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.868 [2024-07-10 10:59:50.456025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.868 [2024-07-10 10:59:50.465735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.868 [2024-07-10 10:59:50.465779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.868 [2024-07-10 10:59:50.465797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.868 [2024-07-10 10:59:50.475574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.868 [2024-07-10 10:59:50.475600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.868 [2024-07-10 10:59:50.475631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.868 [2024-07-10 10:59:50.485635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.868 [2024-07-10 10:59:50.485661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.485692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.495884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.495916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.495934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.506416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.506474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.506490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.516170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.516203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.525882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.525913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.525931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.535399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.535445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.535479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.545959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.545991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.546015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.556422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.556479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.556497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.566197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.566228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.566247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.575993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.576025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.576043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.585894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.585926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.585944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.595980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.596012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.596030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.605840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.605871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.605889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.615907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.615938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.615956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.625876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.625908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.625926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.635775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.635812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.635831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.645619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.645646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.645676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.655690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.655727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.655743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.666076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.666109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.666127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.675878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.675909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.675927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.869 [2024-07-10 10:59:50.685808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:33.869 [2024-07-10 10:59:50.685840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.869 [2024-07-10 10:59:50.685858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.695185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.695217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.695235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.704423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.704477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.704493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.713837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.713870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.713888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.723879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.723910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.723928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.734017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.734048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.734066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.743986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.744018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.744035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.753941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 
00:29:34.128 [2024-07-10 10:59:50.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.753992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.763701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.763745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.763763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.773521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.773547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.773577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.783256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.783287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.783306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.793070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.793112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.793129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.803246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.803279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.803303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.813162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.813194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.813211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.823043] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.823074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.823092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.832927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.832958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.128 [2024-07-10 10:59:50.832975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.128 [2024-07-10 10:59:50.842742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.128 [2024-07-10 10:59:50.842787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.842805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.852772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.852803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.852820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.862559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.862586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.862616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.872206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.872237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.872255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.882046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.882077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.882095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.891908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.891939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.891957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.901889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.901920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.901938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.911863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.911895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.911912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.921870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.921901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.921919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.931865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.931896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.931913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.941813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.941845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.941862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.129 [2024-07-10 10:59:50.951682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.129 [2024-07-10 10:59:50.951725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.129 [2024-07-10 10:59:50.951743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.387 [2024-07-10 10:59:50.961723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.387 [2024-07-10 10:59:50.961767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-10 10:59:50.961785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.387 [2024-07-10 10:59:50.971740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.387 [2024-07-10 10:59:50.971772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.387 [2024-07-10 10:59:50.971796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.387 [2024-07-10 10:59:50.981616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.387 [2024-07-10 10:59:50.981644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:50.981675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:50.991307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:50.991338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:50.991355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.000999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.001030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.001047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.010993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.011024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.011041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.020907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.020938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.020956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.031083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.031114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.031132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.041023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.041055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.050920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.050952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.050970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.061321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.061359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.061378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.071011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.071043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.071061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.080878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.080910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.080928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.090702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.090744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:34.388 [2024-07-10 10:59:51.090762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.100560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.100587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.100617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.110294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.110325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.120230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.120261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.120278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.130202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.130234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.130252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.140007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.140038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.149989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.150020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.150037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.159984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.160014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.160032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.170442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.170487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.170503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.180178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.180210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.180228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.190037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.190068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.190086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.199966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.199997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.200015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.388 [2024-07-10 10:59:51.210414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.388 [2024-07-10 10:59:51.210454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.388 [2024-07-10 10:59:51.210488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.647 [2024-07-10 10:59:51.220235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.647 [2024-07-10 10:59:51.220268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.647 [2024-07-10 10:59:51.220286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.647 [2024-07-10 10:59:51.230088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30) 00:29:34.647 [2024-07-10 10:59:51.230120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.647 [2024-07-10 10:59:51.230144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.647 [2024-07-10 10:59:51.239982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1202d30)
00:29:34.647 [2024-07-10 10:59:51.240013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.647 [2024-07-10 10:59:51.240031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.647 
00:29:34.647 Latency(us)
00:29:34.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.647 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:34.647 nvme0n1 : 2.00 3116.07 389.51 0.00 0.00 5130.68 4514.70 10825.58
00:29:34.647 ===================================================================================================================
00:29:34.647 Total : 3116.07 389.51 0.00 0.00 5130.68 4514.70 10825.58
00:29:34.647 0
00:29:34.647 10:59:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:34.647 10:59:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:34.647 10:59:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:34.647 | .driver_specific
00:29:34.647 | .nvme_error
00:29:34.647 | .status_code
00:29:34.647 | .command_transient_transport_error'
00:29:34.647 10:59:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:34.906 10:59:51 -- host/digest.sh@71 -- # (( 201 > 0 ))
00:29:34.906 10:59:51 -- host/digest.sh@73 -- # killprocess 3578830
00:29:34.906 10:59:51 -- common/autotest_common.sh@926 -- # '[' -z 3578830 ']'
00:29:34.906 10:59:51 -- common/autotest_common.sh@930 -- # kill -0 3578830
00:29:34.906 10:59:51 -- common/autotest_common.sh@931 -- # uname
00:29:34.906 10:59:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:34.906 10:59:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3578830
00:29:34.906 10:59:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:34.906 10:59:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:34.906 10:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3578830'
00:29:34.906 killing process with pid 3578830
00:29:34.906 10:59:51 -- common/autotest_common.sh@945 -- # kill 3578830
00:29:34.906 Received shutdown signal, test time was about 2.000000 seconds
00:29:34.906 
00:29:34.906 Latency(us)
00:29:34.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.906 ===================================================================================================================
00:29:34.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:34.906 10:59:51 -- common/autotest_common.sh@950 -- # wait 3578830
00:29:35.164 10:59:51 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:35.164 10:59:51 -- host/digest.sh@54 -- # local rw bs qd
00:29:35.164 10:59:51 -- host/digest.sh@56 -- # rw=randwrite
00:29:35.164 10:59:51 -- host/digest.sh@56 -- # bs=4096
00:29:35.164 10:59:51 -- host/digest.sh@56 -- # qd=128
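The get_transient_errcount step traced above is how the randread leg of the digest test decides it passed: it reads the per-status-code NVMe error counters back from bdevperf over the RPC socket and requires the transient-transport-error count (201 in this run) to be non-zero. A minimal bash sketch of that query, assuming only the rpc.py socket and jq filter visible in the trace; the helper name count_transient_errors is illustrative and not part of digest.sh:

  # Sketch of the counter query traced above; count_transient_errors is a made-up name.
  count_transient_errors() {
      local bdev=$1
      # bdev_get_iostat exposes NVMe error counters because the controller was configured
      # with bdev_nvme_set_options --nvme-error-stat.
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  errcount=$(count_transient_errors nvme0n1)
  (( errcount > 0 ))   # the run above reported 201, so this check passes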
00:29:35.164 10:59:51 -- host/digest.sh@58 -- # bperfpid=3579261
00:29:35.164 10:59:51 -- host/digest.sh@60 -- # waitforlisten 3579261 /var/tmp/bperf.sock
00:29:35.164 10:59:51 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:35.164 10:59:51 -- common/autotest_common.sh@819 -- # '[' -z 3579261 ']'
00:29:35.164 10:59:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:35.164 10:59:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:35.164 10:59:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:35.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:35.165 10:59:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:35.165 10:59:51 -- common/autotest_common.sh@10 -- # set +x
00:29:35.165 [2024-07-10 10:59:51.791751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:35.165 [2024-07-10 10:59:51.791837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579261 ]
00:29:35.165 EAL: No free 2048 kB hugepages reported on node 1
00:29:35.165 [2024-07-10 10:59:51.855976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:35.165 [2024-07-10 10:59:51.946212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:36.098 10:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:36.098 10:59:52 -- common/autotest_common.sh@852 -- # return 0
00:29:36.098 10:59:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:36.098 10:59:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:36.368 10:59:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:36.369 10:59:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:36.369 10:59:52 -- common/autotest_common.sh@10 -- # set +x
00:29:36.369 10:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:36.369 10:59:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.369 10:59:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.629 nvme0n1
00:29:36.888 10:59:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:36.888 10:59:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:36.888 10:59:53 -- common/autotest_common.sh@10 -- # set +x
00:29:36.888 10:59:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:36.888 10:59:53 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:36.888 10:59:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:36.888 Running I/O for 2 seconds...
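The trace above is the whole setup for the randwrite leg: bdevperf is restarted in wait-for-RPC mode, per-status-code NVMe error counters are switched on, the controller is attached over NVMe/TCP with the data digest (--ddgst) enabled, and the accel crc32c engine is told to corrupt the next 256 digests so the writes that follow all fail their digest check. A rough sketch of that sequence as explicit rpc.py calls is below; every command and flag is taken from the traced lines, while RPC_CMD is an assumption standing in for wherever rpc_cmd points (its socket is not shown in this trace):

  # Sketch of the traced setup, not the digest.sh source itself.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  RPC_BPERF="$SPDK/scripts/rpc.py -s $BPERF_SOCK"   # bperf_rpc in the trace
  RPC_CMD="$SPDK/scripts/rpc.py"                    # assumption: rpc_cmd with its default socket

  # 4 KiB random writes, queue depth 128, 2 s run; -z waits for RPC configuration first.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

  # Keep per-status-code NVMe error counters and retry transient failures indefinitely.
  $RPC_BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Digest corruption stays off while the controller is attached so the connect succeeds.
  $RPC_CMD accel_error_inject_error -o crc32c -t disable

  # Attach the NVMe/TCP controller with data digest (--ddgst) enabled.
  $RPC_BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 256 crc32c results so every data digest check during the run fails.
  $RPC_CMD accel_error_inject_error -o crc32c -t corrupt -i 256

  # Start the timed I/O; the digest errors that follow are the expected outcome.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests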
00:29:36.888 [2024-07-10 10:59:53.577903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ee5c8 00:29:36.888 [2024-07-10 10:59:53.579004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.579046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.591359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eee38 00:29:36.888 [2024-07-10 10:59:53.591872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.591905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.604132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ef6a8 00:29:36.888 [2024-07-10 10:59:53.604812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.604846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.616585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ee190 00:29:36.888 [2024-07-10 10:59:53.617185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.617224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.629086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:36.888 [2024-07-10 10:59:53.629628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.629671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.641483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eaab8 00:29:36.888 [2024-07-10 10:59:53.641976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.642008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.653880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f31b8 00:29:36.888 [2024-07-10 10:59:53.654348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.654381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.666137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f3e60 00:29:36.888 [2024-07-10 10:59:53.666671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.666699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.678410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f31b8 00:29:36.888 [2024-07-10 10:59:53.678974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.679006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.690820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eaab8 00:29:36.888 [2024-07-10 10:59:53.691499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.888 [2024-07-10 10:59:53.691529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:36.888 [2024-07-10 10:59:53.703329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:36.888 [2024-07-10 10:59:53.703867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.889 [2024-07-10 10:59:53.703899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.716188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ebfd0 00:29:37.147 [2024-07-10 10:59:53.716510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.716539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.728482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e3498 00:29:37.147 [2024-07-10 10:59:53.728904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.728935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.740853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:37.147 [2024-07-10 10:59:53.741217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.741248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.753116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eaab8 00:29:37.147 [2024-07-10 10:59:53.753518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.753546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.768040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8088 00:29:37.147 [2024-07-10 10:59:53.769016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.769048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.778808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e0630 00:29:37.147 [2024-07-10 10:59:53.780260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.780292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.791104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e0630 00:29:37.147 [2024-07-10 10:59:53.792664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.792691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.802151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190fa3a0 00:29:37.147 [2024-07-10 10:59:53.802820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.802850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.815239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e3498 00:29:37.147 [2024-07-10 10:59:53.816594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.816623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.827781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e5a90 00:29:37.147 [2024-07-10 10:59:53.828917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.828948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.840096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e5a90 00:29:37.147 [2024-07-10 10:59:53.841275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.841308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.852702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e5a90 00:29:37.147 [2024-07-10 10:59:53.853899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.853931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.865124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e5a90 00:29:37.147 [2024-07-10 10:59:53.866332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.866363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.877546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e5a90 00:29:37.147 [2024-07-10 10:59:53.878794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.878826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.889893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e5a90 00:29:37.147 [2024-07-10 10:59:53.891134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.891165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.902007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ef6a8 00:29:37.147 [2024-07-10 10:59:53.902909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.902940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.914835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f20d8 00:29:37.147 [2024-07-10 10:59:53.915978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.916009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:37.147 [2024-07-10 10:59:53.927399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f2948 00:29:37.147 [2024-07-10 10:59:53.928680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.147 [2024-07-10 10:59:53.928721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:37.148 [2024-07-10 10:59:53.939805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ed4e8 00:29:37.148 [2024-07-10 10:59:53.940980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.148 [2024-07-10 10:59:53.941017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:37.148 [2024-07-10 10:59:53.952202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ed4e8 00:29:37.148 [2024-07-10 10:59:53.953404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.148 [2024-07-10 10:59:53.953445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:37.148 [2024-07-10 10:59:53.964596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ed4e8 00:29:37.148 [2024-07-10 10:59:53.965853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.148 [2024-07-10 10:59:53.965886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:53.977558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ed4e8 00:29:37.406 [2024-07-10 10:59:53.978836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:53.978869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:53.989711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eee38 00:29:37.406 [2024-07-10 10:59:53.990589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:53.990617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.002253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f0350 00:29:37.406 [2024-07-10 10:59:54.003354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 
10:59:54.003386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.014749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f3a28 00:29:37.406 [2024-07-10 10:59:54.015870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.015902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.027018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6300 00:29:37.406 [2024-07-10 10:59:54.028200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.028232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.039674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:37.406 [2024-07-10 10:59:54.040664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.040693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.052185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:37.406 [2024-07-10 10:59:54.053393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.053435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.064645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:37.406 [2024-07-10 10:59:54.065887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.065919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.077083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:37.406 [2024-07-10 10:59:54.078326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.078358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.089331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eaef0 00:29:37.406 [2024-07-10 10:59:54.090397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:37.406 [2024-07-10 10:59:54.090439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.102229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9168 00:29:37.406 [2024-07-10 10:59:54.103095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.103126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.115090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.406 [2024-07-10 10:59:54.116142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.116174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.127631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.406 [2024-07-10 10:59:54.128702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.128730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.140166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.406 [2024-07-10 10:59:54.141239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.141271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.152656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.406 [2024-07-10 10:59:54.153803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.153835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.165269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.406 [2024-07-10 10:59:54.166389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.406 [2024-07-10 10:59:54.166420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.406 [2024-07-10 10:59:54.177918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.406 [2024-07-10 10:59:54.179062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16173 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:37.407 [2024-07-10 10:59:54.179093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.407 [2024-07-10 10:59:54.190527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.407 [2024-07-10 10:59:54.191671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.407 [2024-07-10 10:59:54.191699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:37.407 [2024-07-10 10:59:54.203133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.407 [2024-07-10 10:59:54.204272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.407 [2024-07-10 10:59:54.204304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:37.407 [2024-07-10 10:59:54.215693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.407 [2024-07-10 10:59:54.216896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.407 [2024-07-10 10:59:54.216928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:37.407 [2024-07-10 10:59:54.228529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.407 [2024-07-10 10:59:54.229715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.407 [2024-07-10 10:59:54.229743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.241471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.242669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.242716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.254108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.255303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.255334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.266686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.267942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.267977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.279304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.280549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.280576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.291913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.293167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.293198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.304543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.305801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.305832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.317153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.318422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.329750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.331082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.331113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.342324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.343655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.343684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.354820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.665 [2024-07-10 10:59:54.356147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:24822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.356179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:37.665 [2024-07-10 10:59:54.366737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f9f68 00:29:37.665 [2024-07-10 10:59:54.367065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.665 [2024-07-10 10:59:54.367096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.379546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f8e88 00:29:37.666 [2024-07-10 10:59:54.380591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.380619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.392040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e95a0 00:29:37.666 [2024-07-10 10:59:54.393089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.393119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.404568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.405629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.405657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.417117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.418184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.418215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.429642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.430760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.430792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.442254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.443372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.443403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.454835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.455992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.456023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.467393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.468552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.468595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.666 [2024-07-10 10:59:54.479964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.666 [2024-07-10 10:59:54.481113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.666 [2024-07-10 10:59:54.481145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.493059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.494201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.494232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.505648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.506802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.506833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.518176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.519325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.519356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.530650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 
10:59:54.531872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.531903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.543236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.544559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.544585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.555839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.557086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.557117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.568408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.569672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.569723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.580991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.582238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.582269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.593436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.594665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.594698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.605897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.607193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.607224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.618475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with 
pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.619789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.619820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.631101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e1f80 00:29:37.924 [2024-07-10 10:59:54.632380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.632411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.642939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:37.924 [2024-07-10 10:59:54.644077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.644108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.655575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ee190 00:29:37.924 [2024-07-10 10:59:54.656384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.656414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.668252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ebb98 00:29:37.924 [2024-07-10 10:59:54.669380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.669411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.680801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:37.924 [2024-07-10 10:59:54.681900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.681931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.693395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:37.924 [2024-07-10 10:59:54.694479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.694507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.705898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:37.924 [2024-07-10 10:59:54.707023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.707054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.718594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:37.924 [2024-07-10 10:59:54.719686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.719731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.731172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:37.924 [2024-07-10 10:59:54.732265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:37.924 [2024-07-10 10:59:54.743672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:37.924 [2024-07-10 10:59:54.744825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.924 [2024-07-10 10:59:54.744856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.756747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.757907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.757938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.769359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.770644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.770671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.781967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.783134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.783165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.794521] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.795671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.795714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.807121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.808284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.808316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.819666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.820889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.820920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.832247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.833576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.833602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.844842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.846118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.846152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.857402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.858653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.858680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.870226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.871522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.871548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.882766] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.884094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.884125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:38.182 [2024-07-10 10:59:54.895374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.182 [2024-07-10 10:59:54.896669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.182 [2024-07-10 10:59:54.896696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.907869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190eea00 00:29:38.183 [2024-07-10 10:59:54.909194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.909225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.919672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f4b08 00:29:38.183 [2024-07-10 10:59:54.919994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.920030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.932577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190edd58 00:29:38.183 [2024-07-10 10:59:54.933586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.933614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.945072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f8618 00:29:38.183 [2024-07-10 10:59:54.946088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.946119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.957569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f6890 00:29:38.183 [2024-07-10 10:59:54.958609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.958637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 
10:59:54.970128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.183 [2024-07-10 10:59:54.971170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.971201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.982688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.183 [2024-07-10 10:59:54.983783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.983814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:38.183 [2024-07-10 10:59:54.995289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.183 [2024-07-10 10:59:54.996486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.183 [2024-07-10 10:59:54.996513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.008250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.009369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.009400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.021089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.022190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.022221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.033629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.034762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.034794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.046221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.047402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.047439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:29:38.442 [2024-07-10 10:59:55.058761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.059939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.059969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.071366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.072550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.072591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.084000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.085183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.085213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.096620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.097839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.097871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.109243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.110487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.110515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.121893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.123138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.123170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.134545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.135757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.135800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.147194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.148417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.148456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.159790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.161070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.161102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.172379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.173615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.173643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.184874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f1430 00:29:38.442 [2024-07-10 10:59:55.186177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.186208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.196746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e9e10 00:29:38.442 [2024-07-10 10:59:55.197058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.197089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.209289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f7538 00:29:38.442 [2024-07-10 10:59:55.210261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.210293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.221824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f0ff8 00:29:38.442 [2024-07-10 10:59:55.222831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.222862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:38.442 [2024-07-10 10:59:55.234397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190f6458 00:29:38.442 [2024-07-10 10:59:55.235392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.442 [2024-07-10 10:59:55.235434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.443 [2024-07-10 10:59:55.246919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.443 [2024-07-10 10:59:55.248006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.443 [2024-07-10 10:59:55.248043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:38.443 [2024-07-10 10:59:55.259543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.443 [2024-07-10 10:59:55.260592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.443 [2024-07-10 10:59:55.260621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.272664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.273727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.273773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.285308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.286421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.286475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.297934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.299068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.299099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.310539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.311663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.311689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.323131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.324243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.324274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.335707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.336895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.336926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.348313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.349518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.349547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.360851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.362070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.362102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.373406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.374609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.374635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:38.701 [2024-07-10 10:59:55.385997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.701 [2024-07-10 10:59:55.387201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.701 [2024-07-10 10:59:55.387231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.398549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.702 [2024-07-10 10:59:55.399771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.399813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.411166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.702 [2024-07-10 10:59:55.412346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.412377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.423722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.702 [2024-07-10 10:59:55.424965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.424995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.436345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.702 [2024-07-10 10:59:55.437620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.437646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.448952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e6738 00:29:38.702 [2024-07-10 10:59:55.450211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.450241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.460751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e4140 00:29:38.702 [2024-07-10 10:59:55.461026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.461057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.473632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190edd58 00:29:38.702 [2024-07-10 10:59:55.474625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.474653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.486114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e8d30 00:29:38.702 [2024-07-10 10:59:55.487113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 
10:59:55.487144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.498653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190e38d0 00:29:38.702 [2024-07-10 10:59:55.499680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.499724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.511271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190ec408 00:29:38.702 [2024-07-10 10:59:55.512335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.512365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:38.702 [2024-07-10 10:59:55.523978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190edd58 00:29:38.702 [2024-07-10 10:59:55.525053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.702 [2024-07-10 10:59:55.525083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:38.960 [2024-07-10 10:59:55.536810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190edd58 00:29:38.960 [2024-07-10 10:59:55.537899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.960 [2024-07-10 10:59:55.537930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:38.960 [2024-07-10 10:59:55.549400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190edd58 00:29:38.960 [2024-07-10 10:59:55.550489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.960 [2024-07-10 10:59:55.550517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:38.960 [2024-07-10 10:59:55.562004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126a780) with pdu=0x2000190edd58 00:29:38.960 [2024-07-10 10:59:55.563100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.960 [2024-07-10 10:59:55.563130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:38.960 00:29:38.960 Latency(us) 00:29:38.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.960 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.960 nvme0n1 : 2.00 20333.25 79.43 0.00 0.00 6287.29 2864.17 12039.21 00:29:38.960 
=================================================================================================================== 00:29:38.960 Total : 20333.25 79.43 0.00 0.00 6287.29 2864.17 12039.21 00:29:38.960 0 00:29:38.960 10:59:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:38.960 10:59:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:38.960 10:59:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:38.960 10:59:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:38.960 | .driver_specific 00:29:38.960 | .nvme_error 00:29:38.960 | .status_code 00:29:38.960 | .command_transient_transport_error' 00:29:39.218 10:59:55 -- host/digest.sh@71 -- # (( 159 > 0 )) 00:29:39.218 10:59:55 -- host/digest.sh@73 -- # killprocess 3579261 00:29:39.218 10:59:55 -- common/autotest_common.sh@926 -- # '[' -z 3579261 ']' 00:29:39.218 10:59:55 -- common/autotest_common.sh@930 -- # kill -0 3579261 00:29:39.218 10:59:55 -- common/autotest_common.sh@931 -- # uname 00:29:39.218 10:59:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:39.218 10:59:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3579261 00:29:39.218 10:59:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:39.218 10:59:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:39.218 10:59:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3579261' 00:29:39.218 killing process with pid 3579261 00:29:39.218 10:59:55 -- common/autotest_common.sh@945 -- # kill 3579261 00:29:39.218 Received shutdown signal, test time was about 2.000000 seconds 00:29:39.218 00:29:39.218 Latency(us) 00:29:39.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.218 =================================================================================================================== 00:29:39.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.218 10:59:55 -- common/autotest_common.sh@950 -- # wait 3579261 00:29:39.476 10:59:56 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:39.476 10:59:56 -- host/digest.sh@54 -- # local rw bs qd 00:29:39.476 10:59:56 -- host/digest.sh@56 -- # rw=randwrite 00:29:39.476 10:59:56 -- host/digest.sh@56 -- # bs=131072 00:29:39.476 10:59:56 -- host/digest.sh@56 -- # qd=16 00:29:39.476 10:59:56 -- host/digest.sh@58 -- # bperfpid=3579810 00:29:39.476 10:59:56 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:39.476 10:59:56 -- host/digest.sh@60 -- # waitforlisten 3579810 /var/tmp/bperf.sock 00:29:39.476 10:59:56 -- common/autotest_common.sh@819 -- # '[' -z 3579810 ']' 00:29:39.476 10:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:39.476 10:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:39.476 10:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:39.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:39.476 10:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:39.476 10:59:56 -- common/autotest_common.sh@10 -- # set +x 00:29:39.476 [2024-07-10 10:59:56.149293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:39.476 [2024-07-10 10:59:56.149376] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579810 ] 00:29:39.476 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:39.476 Zero copy mechanism will not be used. 00:29:39.476 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.476 [2024-07-10 10:59:56.217485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.754 [2024-07-10 10:59:56.302313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.359 10:59:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:40.359 10:59:57 -- common/autotest_common.sh@852 -- # return 0 00:29:40.359 10:59:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:40.359 10:59:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:40.924 10:59:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:40.924 10:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.924 10:59:57 -- common/autotest_common.sh@10 -- # set +x 00:29:40.924 10:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.924 10:59:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.924 10:59:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:41.183 nvme0n1 00:29:41.183 10:59:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:41.183 10:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.183 10:59:57 -- common/autotest_common.sh@10 -- # set +x 00:29:41.183 10:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.183 10:59:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:41.183 10:59:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:41.183 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:41.183 Zero copy mechanism will not be used. 00:29:41.183 Running I/O for 2 seconds... 
00:29:41.183 [2024-07-10 10:59:57.891689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.892001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.892037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.903173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.903530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.903574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.916135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.916466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.916494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.928971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.929205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.929233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.941263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.941588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.941630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.954284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.954677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.954705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.967337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.967625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.967653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.980582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.980921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.980949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:57.993307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.183 [2024-07-10 10:59:57.993680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.183 [2024-07-10 10:59:57.993723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.183 [2024-07-10 10:59:58.006361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.184 [2024-07-10 10:59:58.006651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.184 [2024-07-10 10:59:58.006679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.018401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.018654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.018681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.030639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.030832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.030859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.043094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.043396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.043422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.056074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.056342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.056370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.069324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.069608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.069636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.082291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.082638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.082666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.095468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.095747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.095773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.107873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.108244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.108271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.120720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.442 [2024-07-10 10:59:58.121093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.442 [2024-07-10 10:59:58.121120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.442 [2024-07-10 10:59:58.133134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.133594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.133622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.145274] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.145603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.145632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.158486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.158831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.158874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.171987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.172316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.172348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.184120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.184519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.184549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.196816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.197218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.197246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.209443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.209768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.209797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.222677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.222863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.222890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.235483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.235796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 
[2024-07-10 10:59:58.235823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.248673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.249002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.249029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.443 [2024-07-10 10:59:58.262017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.443 [2024-07-10 10:59:58.262387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.443 [2024-07-10 10:59:58.262415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.701 [2024-07-10 10:59:58.274352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.701 [2024-07-10 10:59:58.274765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.701 [2024-07-10 10:59:58.274793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.701 [2024-07-10 10:59:58.286786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.701 [2024-07-10 10:59:58.287095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.701 [2024-07-10 10:59:58.287122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.701 [2024-07-10 10:59:58.299397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.701 [2024-07-10 10:59:58.299800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.701 [2024-07-10 10:59:58.299828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.701 [2024-07-10 10:59:58.310872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.311182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.311210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.323985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.324320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.324347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.336003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.336452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.336479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.348693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.349004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.349031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.362006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.362269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.362294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.373927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.374208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.374235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.386797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.387090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.387118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.399602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.400150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.400179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.412800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.413187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.413215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.425281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.425616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.425644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.438242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.438558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.438585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.450866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.451218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.451259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.464091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.464412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.464445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.477336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.477635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.477663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.489747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.489979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.490005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.502502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.502869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.502904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.702 [2024-07-10 10:59:58.515452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.702 [2024-07-10 10:59:58.515847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.702 [2024-07-10 10:59:58.515874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.528275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.528688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.528730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.541010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.541444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.541472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.554624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.554845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.554872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.567592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.567901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.567927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.580530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.580768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.580795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.594082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 
[2024-07-10 10:59:58.594504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.594531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.606493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.606844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.606881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.618338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.618768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.618811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.630740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.631178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.631206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.643673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.644012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.644039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.657310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.657765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.657794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.671041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.671341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.671369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.684981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.685257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.685285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.961 [2024-07-10 10:59:58.698472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.961 [2024-07-10 10:59:58.698754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.961 [2024-07-10 10:59:58.698781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.962 [2024-07-10 10:59:58.712207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.962 [2024-07-10 10:59:58.712841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.962 [2024-07-10 10:59:58.712869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.962 [2024-07-10 10:59:58.725025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.962 [2024-07-10 10:59:58.725440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.962 [2024-07-10 10:59:58.725483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:41.962 [2024-07-10 10:59:58.737807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.962 [2024-07-10 10:59:58.738069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.962 [2024-07-10 10:59:58.738095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:41.962 [2024-07-10 10:59:58.750862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.962 [2024-07-10 10:59:58.751164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.962 [2024-07-10 10:59:58.751192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.962 [2024-07-10 10:59:58.762467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.962 [2024-07-10 10:59:58.762798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.962 [2024-07-10 10:59:58.762825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:41.962 [2024-07-10 10:59:58.776178] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:41.962 [2024-07-10 10:59:58.776601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.962 [2024-07-10 10:59:58.776628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.788931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.789189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.789217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.802162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.802535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.802562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.815018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.815476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.815503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.827489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.827819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.827846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.839635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.839994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.840022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.853016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.853397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.853430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:42.221 [2024-07-10 10:59:58.865781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.221 [2024-07-10 10:59:58.866118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.221 [2024-07-10 10:59:58.866144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.221 [2024-07-10 10:59:58.878207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.878621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.878649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.890918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.891231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.891258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.903806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.904212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.904239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.916029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.916463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.916492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.928791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.929072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.929100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.941789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.942085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.942113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.954888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.955132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.955158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.966471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.966780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.966806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.979196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.979492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.979517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:58.991413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:58.991757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:58.991784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:59.004491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:59.004840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:59.004867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:59.017340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:59.017836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:59.017863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:59.029870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:59.030250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:59.030277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.222 [2024-07-10 10:59:59.042776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.222 [2024-07-10 10:59:59.043003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.222 [2024-07-10 10:59:59.043029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.054802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.055050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.055082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.067372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.067663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.067690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.079934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.080271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.080299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.091953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.092330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.092357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.105641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.105874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.105900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.117387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.117713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.117741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.129857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.130122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.130164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.142660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.143029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.143056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.155987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.156305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.156332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.169432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.169699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.169727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.182886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.183184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.183212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.195537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.195872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.195899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.208021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.208437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.481 [2024-07-10 10:59:59.208464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.220744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.221099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.221126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.233467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.233925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.233951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.247016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.247200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.247226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.259246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.259676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.259703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.272084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.272464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.272490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.284504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.284868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.284894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.481 [2024-07-10 10:59:59.297313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.481 [2024-07-10 10:59:59.297609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.481 [2024-07-10 10:59:59.297636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.309359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.309668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.309695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.321827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.322281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.322307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.335052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.335352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.335380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.347507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.347872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.347899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.360010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.360333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.360359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.373684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.373938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.373964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.386247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.386394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.386433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.398242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.398517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.398544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.410414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.410731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.410758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.423523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.423944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.423973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.436553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.436904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.436932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.450047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.450385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.450412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.463135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.463444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.463472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.475624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.475868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.475909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.488923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.489310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.489337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.740 [2024-07-10 10:59:59.501632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.740 [2024-07-10 10:59:59.501968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.740 [2024-07-10 10:59:59.501995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.741 [2024-07-10 10:59:59.514034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.741 [2024-07-10 10:59:59.514303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.741 [2024-07-10 10:59:59.514330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.741 [2024-07-10 10:59:59.527255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.741 [2024-07-10 10:59:59.527581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.741 [2024-07-10 10:59:59.527623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.741 [2024-07-10 10:59:59.539722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.741 [2024-07-10 10:59:59.539957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.741 [2024-07-10 10:59:59.539984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.741 [2024-07-10 10:59:59.552209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.741 [2024-07-10 10:59:59.552608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.741 [2024-07-10 10:59:59.552636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.565735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 
[2024-07-10 10:59:59.566117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.566144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.578848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.579210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.579237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.592715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.593054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.593081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.605616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.605862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.605889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.619056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.619295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.619322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.632420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.633016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.633043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.646505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.646984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.647012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.659290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:42.999 [2024-07-10 10:59:59.659538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.999 [2024-07-10 10:59:59.659567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.999 [2024-07-10 10:59:59.671457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.671763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.671804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.684254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.684634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.684662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.697111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.697549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.697578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.712104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.712505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.712533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.726036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.726465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.726499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.739608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.740014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.740042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.752855] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.753120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.753154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.765712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.766028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.766058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.779252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.779590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.779618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.792668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.793052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.793080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.805983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.806313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.806340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.000 [2024-07-10 10:59:59.820410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.000 [2024-07-10 10:59:59.820635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.000 [2024-07-10 10:59:59.820663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.260 [2024-07-10 10:59:59.831696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.260 [2024-07-10 10:59:59.832000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.260 [2024-07-10 10:59:59.832034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
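Each failure above is a pair: tcp.c reports a CRC-32C data-digest mismatch on the received PDU, and nvme_qpair.c prints the resulting completion, COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness then counts those completions through the bdev iostat RPC traced a little further below; a condensed sketch of that query, assembled from the commands visible in this trace (assuming the bperf RPC socket is still listening at /var/tmp/bperf.sock and the target bdev is still named nvme0n1), looks like:

# Hedged sketch: count transient transport errors for nvme0n1, mirroring what
# get_transient_errcount does in host/digest.sh (paths and socket taken from this trace).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest-error test passes when at least one injected digest error surfaced
# as a transient transport error, i.e. when this count is greater than zero.
(( errcount > 0 )) && echo "transient transport errors: $errcount"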
00:29:43.260 [2024-07-10 10:59:59.845262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.260 [2024-07-10 10:59:59.845745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.260 [2024-07-10 10:59:59.845781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.260 [2024-07-10 10:59:59.858251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.260 [2024-07-10 10:59:59.858631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.260 [2024-07-10 10:59:59.858658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.260 [2024-07-10 10:59:59.870821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.260 [2024-07-10 10:59:59.871102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.260 [2024-07-10 10:59:59.871129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.260 [2024-07-10 10:59:59.883531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x126aa50) with pdu=0x2000190fef90 00:29:43.260 [2024-07-10 10:59:59.883890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.260 [2024-07-10 10:59:59.883916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.260 00:29:43.260 Latency(us) 00:29:43.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.260 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:43.260 nvme0n1 : 2.01 2412.51 301.56 0.00 0.00 6614.86 4975.88 14854.83 00:29:43.260 =================================================================================================================== 00:29:43.260 Total : 2412.51 301.56 0.00 0.00 6614.86 4975.88 14854.83 00:29:43.260 0 00:29:43.260 10:59:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:43.260 10:59:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:43.260 10:59:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:43.260 10:59:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:43.260 | .driver_specific 00:29:43.260 | .nvme_error 00:29:43.260 | .status_code 00:29:43.260 | .command_transient_transport_error' 00:29:43.517 11:00:00 -- host/digest.sh@71 -- # (( 156 > 0 )) 00:29:43.517 11:00:00 -- host/digest.sh@73 -- # killprocess 3579810 00:29:43.517 11:00:00 -- common/autotest_common.sh@926 -- # '[' -z 3579810 ']' 00:29:43.517 11:00:00 -- common/autotest_common.sh@930 -- # kill -0 3579810 00:29:43.517 11:00:00 -- common/autotest_common.sh@931 -- # uname 00:29:43.517 11:00:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:43.517 11:00:00 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3579810 00:29:43.517 11:00:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:43.517 11:00:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:43.517 11:00:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3579810' 00:29:43.517 killing process with pid 3579810 00:29:43.517 11:00:00 -- common/autotest_common.sh@945 -- # kill 3579810 00:29:43.517 Received shutdown signal, test time was about 2.000000 seconds 00:29:43.517 00:29:43.517 Latency(us) 00:29:43.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.517 =================================================================================================================== 00:29:43.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.517 11:00:00 -- common/autotest_common.sh@950 -- # wait 3579810 00:29:43.775 11:00:00 -- host/digest.sh@115 -- # killprocess 3578166 00:29:43.775 11:00:00 -- common/autotest_common.sh@926 -- # '[' -z 3578166 ']' 00:29:43.775 11:00:00 -- common/autotest_common.sh@930 -- # kill -0 3578166 00:29:43.775 11:00:00 -- common/autotest_common.sh@931 -- # uname 00:29:43.775 11:00:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:43.775 11:00:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3578166 00:29:43.775 11:00:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:43.775 11:00:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:43.775 11:00:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3578166' 00:29:43.775 killing process with pid 3578166 00:29:43.775 11:00:00 -- common/autotest_common.sh@945 -- # kill 3578166 00:29:43.775 11:00:00 -- common/autotest_common.sh@950 -- # wait 3578166 00:29:44.033 00:29:44.033 real 0m17.905s 00:29:44.033 user 0m36.485s 00:29:44.033 sys 0m4.264s 00:29:44.033 11:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.033 11:00:00 -- common/autotest_common.sh@10 -- # set +x 00:29:44.033 ************************************ 00:29:44.033 END TEST nvmf_digest_error 00:29:44.033 ************************************ 00:29:44.033 11:00:00 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:44.033 11:00:00 -- host/digest.sh@139 -- # nvmftestfini 00:29:44.033 11:00:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:44.033 11:00:00 -- nvmf/common.sh@116 -- # sync 00:29:44.033 11:00:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:44.033 11:00:00 -- nvmf/common.sh@119 -- # set +e 00:29:44.033 11:00:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:44.033 11:00:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:44.033 rmmod nvme_tcp 00:29:44.033 rmmod nvme_fabrics 00:29:44.033 rmmod nvme_keyring 00:29:44.033 11:00:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:44.033 11:00:00 -- nvmf/common.sh@123 -- # set -e 00:29:44.033 11:00:00 -- nvmf/common.sh@124 -- # return 0 00:29:44.033 11:00:00 -- nvmf/common.sh@477 -- # '[' -n 3578166 ']' 00:29:44.033 11:00:00 -- nvmf/common.sh@478 -- # killprocess 3578166 00:29:44.033 11:00:00 -- common/autotest_common.sh@926 -- # '[' -z 3578166 ']' 00:29:44.033 11:00:00 -- common/autotest_common.sh@930 -- # kill -0 3578166 00:29:44.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3578166) - No such process 00:29:44.033 11:00:00 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3578166 is not found' 
00:29:44.033 Process with pid 3578166 is not found 00:29:44.033 11:00:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:44.033 11:00:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:44.033 11:00:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:44.033 11:00:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.033 11:00:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:44.033 11:00:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.033 11:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.033 11:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.568 11:00:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:46.568 00:29:46.568 real 0m37.153s 00:29:46.568 user 1m6.274s 00:29:46.568 sys 0m9.834s 00:29:46.568 11:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.568 11:00:02 -- common/autotest_common.sh@10 -- # set +x 00:29:46.568 ************************************ 00:29:46.568 END TEST nvmf_digest 00:29:46.568 ************************************ 00:29:46.568 11:00:02 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:46.568 11:00:02 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:46.568 11:00:02 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:46.568 11:00:02 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:46.568 11:00:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:46.568 11:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.568 11:00:02 -- common/autotest_common.sh@10 -- # set +x 00:29:46.568 ************************************ 00:29:46.568 START TEST nvmf_bdevperf 00:29:46.568 ************************************ 00:29:46.568 11:00:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:46.568 * Looking for test storage... 
00:29:46.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.568 11:00:02 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.568 11:00:02 -- nvmf/common.sh@7 -- # uname -s 00:29:46.568 11:00:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.568 11:00:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.568 11:00:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.568 11:00:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.568 11:00:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.568 11:00:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.568 11:00:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.569 11:00:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.569 11:00:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.569 11:00:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.569 11:00:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:46.569 11:00:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:46.569 11:00:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.569 11:00:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.569 11:00:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.569 11:00:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.569 11:00:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.569 11:00:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.569 11:00:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.569 11:00:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.569 11:00:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.569 11:00:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.569 11:00:02 -- paths/export.sh@5 -- # export PATH 00:29:46.569 11:00:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.569 11:00:02 -- nvmf/common.sh@46 -- # : 0 00:29:46.569 11:00:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:46.569 11:00:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:46.569 11:00:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:46.569 11:00:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.569 11:00:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.569 11:00:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:46.569 11:00:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:46.569 11:00:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:46.569 11:00:02 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:46.569 11:00:02 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:46.569 11:00:02 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:46.569 11:00:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:46.569 11:00:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.569 11:00:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:46.569 11:00:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:46.569 11:00:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:46.569 11:00:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.569 11:00:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.569 11:00:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.569 11:00:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:46.569 11:00:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:46.569 11:00:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:46.569 11:00:02 -- common/autotest_common.sh@10 -- # set +x 00:29:48.465 11:00:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:48.465 11:00:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:48.465 11:00:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:48.465 11:00:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:48.465 11:00:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:48.465 11:00:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:48.465 11:00:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:48.465 11:00:04 -- nvmf/common.sh@294 -- # net_devs=() 00:29:48.465 11:00:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:48.465 11:00:04 -- nvmf/common.sh@295 
-- # e810=() 00:29:48.465 11:00:04 -- nvmf/common.sh@295 -- # local -ga e810 00:29:48.465 11:00:04 -- nvmf/common.sh@296 -- # x722=() 00:29:48.465 11:00:04 -- nvmf/common.sh@296 -- # local -ga x722 00:29:48.465 11:00:04 -- nvmf/common.sh@297 -- # mlx=() 00:29:48.465 11:00:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:48.465 11:00:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.465 11:00:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:48.465 11:00:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:48.465 11:00:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:48.465 11:00:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:48.465 11:00:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:48.465 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:48.465 11:00:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:48.465 11:00:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:48.465 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:48.465 11:00:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:48.465 11:00:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:48.465 11:00:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.465 11:00:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:48.465 11:00:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.465 11:00:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:48.465 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:29:48.465 11:00:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.465 11:00:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:48.465 11:00:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.465 11:00:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:48.465 11:00:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.465 11:00:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:48.465 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:48.465 11:00:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.465 11:00:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:48.465 11:00:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:48.465 11:00:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:48.465 11:00:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:48.465 11:00:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.465 11:00:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.465 11:00:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.465 11:00:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:48.465 11:00:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.465 11:00:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.465 11:00:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:48.465 11:00:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.465 11:00:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.465 11:00:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:48.465 11:00:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:48.465 11:00:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.465 11:00:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.465 11:00:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.465 11:00:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.465 11:00:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:48.465 11:00:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.465 11:00:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.465 11:00:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.465 11:00:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:48.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:29:48.465 00:29:48.466 --- 10.0.0.2 ping statistics --- 00:29:48.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.466 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:29:48.466 11:00:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:29:48.466 00:29:48.466 --- 10.0.0.1 ping statistics --- 00:29:48.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.466 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:29:48.466 11:00:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.466 11:00:05 -- nvmf/common.sh@410 -- # return 0 00:29:48.466 11:00:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:48.466 11:00:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.466 11:00:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:48.466 11:00:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:48.466 11:00:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.466 11:00:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:48.466 11:00:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:48.466 11:00:05 -- host/bdevperf.sh@25 -- # tgt_init 00:29:48.466 11:00:05 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:48.466 11:00:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:48.466 11:00:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:48.466 11:00:05 -- common/autotest_common.sh@10 -- # set +x 00:29:48.466 11:00:05 -- nvmf/common.sh@469 -- # nvmfpid=3582432 00:29:48.466 11:00:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:48.466 11:00:05 -- nvmf/common.sh@470 -- # waitforlisten 3582432 00:29:48.466 11:00:05 -- common/autotest_common.sh@819 -- # '[' -z 3582432 ']' 00:29:48.466 11:00:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.466 11:00:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:48.466 11:00:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.466 11:00:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:48.466 11:00:05 -- common/autotest_common.sh@10 -- # set +x 00:29:48.466 [2024-07-10 11:00:05.151230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:48.466 [2024-07-10 11:00:05.151317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.466 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.466 [2024-07-10 11:00:05.220632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.723 [2024-07-10 11:00:05.310411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:48.723 [2024-07-10 11:00:05.310576] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.723 [2024-07-10 11:00:05.310595] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.723 [2024-07-10 11:00:05.310610] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
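For reference, the nvmf_tcp_init sequence traced just above boils down to the following shell steps (a condensed sketch, not the test code verbatim; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x addresses are simply the values observed in this run):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator (host) address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # host -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 3582432 above), so everything the host does against 10.0.0.2:4420 crosses the physical link between the two e810-class ports (driver ice) detected earlier.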
00:29:48.723 [2024-07-10 11:00:05.310691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.723 [2024-07-10 11:00:05.310923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.723 [2024-07-10 11:00:05.310927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.286 11:00:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:49.286 11:00:06 -- common/autotest_common.sh@852 -- # return 0 00:29:49.286 11:00:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:49.286 11:00:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:49.286 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.286 11:00:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.286 11:00:06 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.286 11:00:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.286 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.286 [2024-07-10 11:00:06.100945] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.286 11:00:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.286 11:00:06 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.286 11:00:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.286 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.544 Malloc0 00:29:49.544 11:00:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.544 11:00:06 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.544 11:00:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.544 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.544 11:00:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.544 11:00:06 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:49.544 11:00:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.544 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.544 11:00:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.544 11:00:06 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.544 11:00:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:49.544 11:00:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.544 [2024-07-10 11:00:06.157177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.544 11:00:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.544 11:00:06 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:49.544 11:00:06 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:49.544 11:00:06 -- nvmf/common.sh@520 -- # config=() 00:29:49.544 11:00:06 -- nvmf/common.sh@520 -- # local subsystem config 00:29:49.544 11:00:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:49.544 11:00:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:49.544 { 00:29:49.544 "params": { 00:29:49.544 "name": "Nvme$subsystem", 00:29:49.544 "trtype": "$TEST_TRANSPORT", 00:29:49.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.544 "adrfam": "ipv4", 00:29:49.544 "trsvcid": "$NVMF_PORT", 00:29:49.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.544 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.544 "hdgst": ${hdgst:-false}, 00:29:49.544 "ddgst": ${ddgst:-false} 00:29:49.544 }, 00:29:49.544 "method": "bdev_nvme_attach_controller" 00:29:49.544 } 00:29:49.544 EOF 00:29:49.544 )") 00:29:49.544 11:00:06 -- nvmf/common.sh@542 -- # cat 00:29:49.544 11:00:06 -- nvmf/common.sh@544 -- # jq . 00:29:49.544 11:00:06 -- nvmf/common.sh@545 -- # IFS=, 00:29:49.544 11:00:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:49.544 "params": { 00:29:49.544 "name": "Nvme1", 00:29:49.544 "trtype": "tcp", 00:29:49.544 "traddr": "10.0.0.2", 00:29:49.544 "adrfam": "ipv4", 00:29:49.544 "trsvcid": "4420", 00:29:49.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.544 "hdgst": false, 00:29:49.544 "ddgst": false 00:29:49.544 }, 00:29:49.544 "method": "bdev_nvme_attach_controller" 00:29:49.544 }' 00:29:49.544 [2024-07-10 11:00:06.199309] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:49.544 [2024-07-10 11:00:06.199389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582590 ] 00:29:49.544 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.544 [2024-07-10 11:00:06.260109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.544 [2024-07-10 11:00:06.344558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.802 Running I/O for 1 seconds... 00:29:50.733 00:29:50.733 Latency(us) 00:29:50.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:50.733 Verification LBA range: start 0x0 length 0x4000 00:29:50.733 Nvme1n1 : 1.01 11777.19 46.00 0.00 0.00 10815.59 1480.63 17864.63 00:29:50.733 =================================================================================================================== 00:29:50.733 Total : 11777.19 46.00 0.00 0.00 10815.59 1480.63 17864.63 00:29:50.990 11:00:07 -- host/bdevperf.sh@30 -- # bdevperfpid=3583039 00:29:50.990 11:00:07 -- host/bdevperf.sh@32 -- # sleep 3 00:29:50.990 11:00:07 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:50.990 11:00:07 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:50.990 11:00:07 -- nvmf/common.sh@520 -- # config=() 00:29:50.990 11:00:07 -- nvmf/common.sh@520 -- # local subsystem config 00:29:50.990 11:00:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:50.990 11:00:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:50.990 { 00:29:50.990 "params": { 00:29:50.990 "name": "Nvme$subsystem", 00:29:50.990 "trtype": "$TEST_TRANSPORT", 00:29:50.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.990 "adrfam": "ipv4", 00:29:50.990 "trsvcid": "$NVMF_PORT", 00:29:50.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.990 "hdgst": ${hdgst:-false}, 00:29:50.990 "ddgst": ${ddgst:-false} 00:29:50.990 }, 00:29:50.990 "method": "bdev_nvme_attach_controller" 00:29:50.990 } 00:29:50.990 EOF 00:29:50.990 )") 00:29:50.990 11:00:07 -- nvmf/common.sh@542 -- # cat 00:29:50.990 11:00:07 -- nvmf/common.sh@544 -- # jq . 
00:29:50.990 11:00:07 -- nvmf/common.sh@545 -- # IFS=, 00:29:50.990 11:00:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:50.990 "params": { 00:29:50.990 "name": "Nvme1", 00:29:50.990 "trtype": "tcp", 00:29:50.990 "traddr": "10.0.0.2", 00:29:50.990 "adrfam": "ipv4", 00:29:50.990 "trsvcid": "4420", 00:29:50.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.990 "hdgst": false, 00:29:50.990 "ddgst": false 00:29:50.990 }, 00:29:50.990 "method": "bdev_nvme_attach_controller" 00:29:50.990 }' 00:29:50.990 [2024-07-10 11:00:07.767511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:50.990 [2024-07-10 11:00:07.767601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583039 ] 00:29:50.990 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.247 [2024-07-10 11:00:07.829360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.247 [2024-07-10 11:00:07.914769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.503 Running I/O for 15 seconds... 00:29:54.027 11:00:10 -- host/bdevperf.sh@33 -- # kill -9 3582432 00:29:54.027 11:00:10 -- host/bdevperf.sh@35 -- # sleep 3 00:29:54.027 [2024-07-10 11:00:10.742509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.742976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.742995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.027 [2024-07-10 11:00:10.743457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.027 [2024-07-10 11:00:10.743489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.027 [2024-07-10 11:00:10.743503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.743561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.743589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.743618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.743647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.743758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 
[2024-07-10 11:00:10.743838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.743869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.743968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.743985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.028 [2024-07-10 11:00:10.744641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.028 [2024-07-10 11:00:10.744685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.028 [2024-07-10 11:00:10.744699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.744748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.744780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.744813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.744845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 
[2024-07-10 11:00:10.744862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.744878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.744910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.744942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.744974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.744991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.029 [2024-07-10 11:00:10.745826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:54.029 [2024-07-10 11:00:10.745892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.029 [2024-07-10 11:00:10.745912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.029 [2024-07-10 11:00:10.745928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.745945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.745960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.745977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.745991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:54.030 [2024-07-10 11:00:10.746619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.030 [2024-07-10 11:00:10.746854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0f450 is same with the state(5) to be set 00:29:54.030 [2024-07-10 11:00:10.746889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:54.030 [2024-07-10 11:00:10.746901] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:54.030 [2024-07-10 11:00:10.746914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:8 PRP1 0x0 PRP2 0x0 00:29:54.030 [2024-07-10 11:00:10.746929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.746994] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd0f450 was disconnected and freed. reset controller. 00:29:54.030 [2024-07-10 11:00:10.747073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.030 [2024-07-10 11:00:10.747096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.030 [2024-07-10 11:00:10.747113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.031 [2024-07-10 11:00:10.747142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.031 [2024-07-10 11:00:10.747155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.031 [2024-07-10 11:00:10.747168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.031 [2024-07-10 11:00:10.747181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.031 [2024-07-10 11:00:10.747210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.031 [2024-07-10 11:00:10.747222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.749534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.749572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.750186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.750389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.750413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.750437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.750610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.750744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.750766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.750784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.753405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.031 [2024-07-10 11:00:10.762311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.762709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.762851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.762877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.762910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.763075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.763227] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.763251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.763267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.765714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.031 [2024-07-10 11:00:10.774783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.775091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.775364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.775412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.775439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.775589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.775813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.775837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.775852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.778218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.031 [2024-07-10 11:00:10.787213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.787558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.787714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.787744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.787762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.787981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.788175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.788198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.788214] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.790356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.031 [2024-07-10 11:00:10.799873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.800226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.800438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.800467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.800485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.800632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.800783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.800806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.800822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.803195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.031 [2024-07-10 11:00:10.812526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.812831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.813025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.813070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.813088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.813234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.813385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.813408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.813435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.815774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.031 [2024-07-10 11:00:10.825091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.825410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.825597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.825625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.825642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.031 [2024-07-10 11:00:10.825843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.031 [2024-07-10 11:00:10.826012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.031 [2024-07-10 11:00:10.826041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.031 [2024-07-10 11:00:10.826057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.031 [2024-07-10 11:00:10.828412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.031 [2024-07-10 11:00:10.837482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.031 [2024-07-10 11:00:10.837818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.838010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.031 [2024-07-10 11:00:10.838036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.031 [2024-07-10 11:00:10.838052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.032 [2024-07-10 11:00:10.838270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.032 [2024-07-10 11:00:10.838449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.032 [2024-07-10 11:00:10.838473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.032 [2024-07-10 11:00:10.838488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.032 [2024-07-10 11:00:10.840821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.291 [2024-07-10 11:00:10.850097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.291 [2024-07-10 11:00:10.850479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.850669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.850699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.291 [2024-07-10 11:00:10.850718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.291 [2024-07-10 11:00:10.850862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.291 [2024-07-10 11:00:10.851015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.291 [2024-07-10 11:00:10.851039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.291 [2024-07-10 11:00:10.851055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.291 [2024-07-10 11:00:10.853335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.291 [2024-07-10 11:00:10.862914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.291 [2024-07-10 11:00:10.863350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.863556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.863586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.291 [2024-07-10 11:00:10.863603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.291 [2024-07-10 11:00:10.863768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.291 [2024-07-10 11:00:10.863937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.291 [2024-07-10 11:00:10.863961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.291 [2024-07-10 11:00:10.863982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.291 [2024-07-10 11:00:10.866265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.291 [2024-07-10 11:00:10.875603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.291 [2024-07-10 11:00:10.875940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.876103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.876128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.291 [2024-07-10 11:00:10.876144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.291 [2024-07-10 11:00:10.876319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.291 [2024-07-10 11:00:10.876523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.291 [2024-07-10 11:00:10.876547] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.291 [2024-07-10 11:00:10.876563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.291 [2024-07-10 11:00:10.878898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.291 [2024-07-10 11:00:10.888252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.291 [2024-07-10 11:00:10.888649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.888849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.888876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.291 [2024-07-10 11:00:10.888893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.291 [2024-07-10 11:00:10.889076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.291 [2024-07-10 11:00:10.889209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.291 [2024-07-10 11:00:10.889233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.291 [2024-07-10 11:00:10.889248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.291 [2024-07-10 11:00:10.891613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.291 [2024-07-10 11:00:10.900629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.291 [2024-07-10 11:00:10.900983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.901131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.901159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.291 [2024-07-10 11:00:10.901176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.291 [2024-07-10 11:00:10.901359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.291 [2024-07-10 11:00:10.901539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.291 [2024-07-10 11:00:10.901564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.291 [2024-07-10 11:00:10.901579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.291 [2024-07-10 11:00:10.903974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.291 [2024-07-10 11:00:10.913304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.291 [2024-07-10 11:00:10.913647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.913805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-10 11:00:10.913833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.291 [2024-07-10 11:00:10.913850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.913961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.914147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.914170] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.914186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.916460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.292 [2024-07-10 11:00:10.926089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:10.926401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.926562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.926588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:10.926603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.926827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.927029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.927052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.927068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.929430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.292 [2024-07-10 11:00:10.938679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:10.939073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.939254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.939293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:10.939309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.939498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.939659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.939683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.939699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.942178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.292 [2024-07-10 11:00:10.951510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:10.951945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.952143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.952189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:10.952207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.952355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.952537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.952562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.952578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.954896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.292 [2024-07-10 11:00:10.964167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:10.964547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.964777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.964828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:10.964846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.965048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.965198] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.965221] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.965237] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.967565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.292 [2024-07-10 11:00:10.976854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:10.977267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.977467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.977493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:10.977508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.977696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.977908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.977932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.977948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.980356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.292 [2024-07-10 11:00:10.989345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:10.989664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.990009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:10.990066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:10.990083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:10.990230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:10.990417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:10.990454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:10.990470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:10.992884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.292 [2024-07-10 11:00:11.002039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:11.002414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:11.002599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:11.002623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:11.002638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:11.002817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:11.002987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.292 [2024-07-10 11:00:11.003010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.292 [2024-07-10 11:00:11.003026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.292 [2024-07-10 11:00:11.005439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.292 [2024-07-10 11:00:11.014699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.292 [2024-07-10 11:00:11.015174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:11.015394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-10 11:00:11.015422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.292 [2024-07-10 11:00:11.015451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.292 [2024-07-10 11:00:11.015617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.292 [2024-07-10 11:00:11.015731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.015754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.015770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.018157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.293 [2024-07-10 11:00:11.027291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.027684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.027845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.027881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.027899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.028046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.028215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.028238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.028253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.030507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.293 [2024-07-10 11:00:11.039846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.040235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.040405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.040445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.040465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.040595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.040786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.040809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.040825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.043018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.293 [2024-07-10 11:00:11.052495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.052871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.053055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.053083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.053100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.053265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.053480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.053505] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.053520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.055821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.293 [2024-07-10 11:00:11.065120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.065482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.065654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.065682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.065704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.065870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.066039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.066062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.066078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.068413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.293 [2024-07-10 11:00:11.077755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.078180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.078351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.078380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.078397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.078534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.078757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.078781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.078796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.081004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.293 [2024-07-10 11:00:11.090283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.090634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.090784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.090813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.090831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.090996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.091165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.091189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.091205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.093546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.293 [2024-07-10 11:00:11.103025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.293 [2024-07-10 11:00:11.103352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.103520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-10 11:00:11.103549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.293 [2024-07-10 11:00:11.103566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.293 [2024-07-10 11:00:11.103755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.293 [2024-07-10 11:00:11.103924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.293 [2024-07-10 11:00:11.103948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.293 [2024-07-10 11:00:11.103964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.293 [2024-07-10 11:00:11.106340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.552 [2024-07-10 11:00:11.115741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.552 [2024-07-10 11:00:11.116091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.116259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.116285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.552 [2024-07-10 11:00:11.116301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.552 [2024-07-10 11:00:11.116445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.552 [2024-07-10 11:00:11.116678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.552 [2024-07-10 11:00:11.116703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.552 [2024-07-10 11:00:11.116718] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.552 [2024-07-10 11:00:11.119037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.552 [2024-07-10 11:00:11.128416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.552 [2024-07-10 11:00:11.128781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.128968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.128994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.552 [2024-07-10 11:00:11.129010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.552 [2024-07-10 11:00:11.129175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.552 [2024-07-10 11:00:11.129420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.552 [2024-07-10 11:00:11.129449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.552 [2024-07-10 11:00:11.129463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.552 [2024-07-10 11:00:11.131782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.552 [2024-07-10 11:00:11.140907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.552 [2024-07-10 11:00:11.141313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.141529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.141555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.552 [2024-07-10 11:00:11.141571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.552 [2024-07-10 11:00:11.141737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.552 [2024-07-10 11:00:11.141858] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.552 [2024-07-10 11:00:11.141882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.552 [2024-07-10 11:00:11.141898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.552 [2024-07-10 11:00:11.144239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.552 [2024-07-10 11:00:11.153541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.552 [2024-07-10 11:00:11.153874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.154035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.552 [2024-07-10 11:00:11.154062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.552 [2024-07-10 11:00:11.154078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.552 [2024-07-10 11:00:11.154239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.552 [2024-07-10 11:00:11.154420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.552 [2024-07-10 11:00:11.154454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.552 [2024-07-10 11:00:11.154484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.552 [2024-07-10 11:00:11.156735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.552 [2024-07-10 11:00:11.166014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.552 [2024-07-10 11:00:11.166399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.166615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.166641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.166656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.166833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.167002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.167026] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.167041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.169409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.553 [2024-07-10 11:00:11.178571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.179002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.179248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.179288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.179303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.179514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.179695] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.179725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.179741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.182076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.553 [2024-07-10 11:00:11.191098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.191457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.191633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.191661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.191678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.191825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.191993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.192017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.192032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.194369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.553 [2024-07-10 11:00:11.203663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.204034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.204218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.204246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.204263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.204392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.204572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.204597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.204613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.206858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.553 [2024-07-10 11:00:11.216360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.216728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.216961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.216986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.217001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.217176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.217360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.217383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.217404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.219694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.553 [2024-07-10 11:00:11.228655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.228996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.229165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.229190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.229206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.229412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.229575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.229599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.229614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.231966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.553 [2024-07-10 11:00:11.241286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.241590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.241770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.241809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.241825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.241996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.553 [2024-07-10 11:00:11.242118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.553 [2024-07-10 11:00:11.242141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.553 [2024-07-10 11:00:11.242157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.553 [2024-07-10 11:00:11.244504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.553 [2024-07-10 11:00:11.253542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.553 [2024-07-10 11:00:11.253899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.254148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.553 [2024-07-10 11:00:11.254176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.553 [2024-07-10 11:00:11.254193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.553 [2024-07-10 11:00:11.254357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.254517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.254541] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.254557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.256859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.554 [2024-07-10 11:00:11.266073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.266503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.266685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.266713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.266730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.266894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.267063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.267086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.267101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.269385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.554 [2024-07-10 11:00:11.278498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.278914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.279172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.279215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.279233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.279362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.279488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.279512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.279528] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.281665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.554 [2024-07-10 11:00:11.291034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.291476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.291659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.291684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.291699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.291892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.292050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.292073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.292089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.294486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.554 [2024-07-10 11:00:11.303688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.304068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.304206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.304233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.304250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.304445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.304614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.304637] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.304653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.307026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.554 [2024-07-10 11:00:11.316216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.316565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.316743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.316771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.316788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.317006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.317193] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.317216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.317232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.319544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.554 [2024-07-10 11:00:11.328815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.329240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.329405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.329442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.329461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.329590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.329722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.329744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.329761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.332132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.554 [2024-07-10 11:00:11.341547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.341925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.342100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.342128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.554 [2024-07-10 11:00:11.342145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.554 [2024-07-10 11:00:11.342273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.554 [2024-07-10 11:00:11.342472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.554 [2024-07-10 11:00:11.342496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.554 [2024-07-10 11:00:11.342512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.554 [2024-07-10 11:00:11.344846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.554 [2024-07-10 11:00:11.353949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.554 [2024-07-10 11:00:11.354373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.554 [2024-07-10 11:00:11.354570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.555 [2024-07-10 11:00:11.354598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.555 [2024-07-10 11:00:11.354616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.555 [2024-07-10 11:00:11.354762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.555 [2024-07-10 11:00:11.354914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.555 [2024-07-10 11:00:11.354937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.555 [2024-07-10 11:00:11.354953] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.555 [2024-07-10 11:00:11.357126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.555 [2024-07-10 11:00:11.366530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.555 [2024-07-10 11:00:11.366949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.555 [2024-07-10 11:00:11.367208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.555 [2024-07-10 11:00:11.367236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.555 [2024-07-10 11:00:11.367253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.555 [2024-07-10 11:00:11.367418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.555 [2024-07-10 11:00:11.367582] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.555 [2024-07-10 11:00:11.367605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.555 [2024-07-10 11:00:11.367621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.555 [2024-07-10 11:00:11.369972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.814 [2024-07-10 11:00:11.379296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.379630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.379787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.379821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.379839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.814 [2024-07-10 11:00:11.380004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.814 [2024-07-10 11:00:11.380137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.814 [2024-07-10 11:00:11.380160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.814 [2024-07-10 11:00:11.380177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.814 [2024-07-10 11:00:11.382772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.814 [2024-07-10 11:00:11.391704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.392064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.392241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.392268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.392286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.814 [2024-07-10 11:00:11.392462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.814 [2024-07-10 11:00:11.392667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.814 [2024-07-10 11:00:11.392691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.814 [2024-07-10 11:00:11.392707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.814 [2024-07-10 11:00:11.395014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.814 [2024-07-10 11:00:11.404269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.404617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.404818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.404861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.404878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.814 [2024-07-10 11:00:11.405061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.814 [2024-07-10 11:00:11.405211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.814 [2024-07-10 11:00:11.405235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.814 [2024-07-10 11:00:11.405250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.814 [2024-07-10 11:00:11.407667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.814 [2024-07-10 11:00:11.417069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.417498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.417678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.417707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.417730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.814 [2024-07-10 11:00:11.417897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.814 [2024-07-10 11:00:11.418084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.814 [2024-07-10 11:00:11.418107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.814 [2024-07-10 11:00:11.418123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.814 [2024-07-10 11:00:11.420451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.814 [2024-07-10 11:00:11.429899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.430272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.430466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.430495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.430513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.814 [2024-07-10 11:00:11.430695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.814 [2024-07-10 11:00:11.430863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.814 [2024-07-10 11:00:11.430886] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.814 [2024-07-10 11:00:11.430902] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.814 [2024-07-10 11:00:11.433256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.814 [2024-07-10 11:00:11.442611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.442985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.443182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.443245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.443262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.814 [2024-07-10 11:00:11.443437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.814 [2024-07-10 11:00:11.443588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.814 [2024-07-10 11:00:11.443612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.814 [2024-07-10 11:00:11.443628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.814 [2024-07-10 11:00:11.445961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.814 [2024-07-10 11:00:11.455266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.814 [2024-07-10 11:00:11.455648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.455896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-07-10 11:00:11.455923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.814 [2024-07-10 11:00:11.455941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.456147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.456370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.456393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.456409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.458626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.815 [2024-07-10 11:00:11.468058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.468438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.468614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.468642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.468659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.468805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.468955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.468979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.468994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.471539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.815 [2024-07-10 11:00:11.480783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.481151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.481338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.481363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.481378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.481503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.481664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.481688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.481704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.484001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.815 [2024-07-10 11:00:11.493556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.493915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.494115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.494139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.494154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.494390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.494628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.494653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.494669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.496966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.815 [2024-07-10 11:00:11.506357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.506735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.506904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.506930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.506945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.507124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.507330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.507354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.507370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.509517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.815 [2024-07-10 11:00:11.519059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.519416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.519584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.519612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.519629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.519776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.519945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.519968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.519984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.522340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.815 [2024-07-10 11:00:11.531775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.532129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.532259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.532283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.532298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.532460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.532612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.532636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.532657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.535065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.815 [2024-07-10 11:00:11.544387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.544758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.544956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.544984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.545001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.545130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.545316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.545340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.545356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.547698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.815 [2024-07-10 11:00:11.557064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.557380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.557564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-07-10 11:00:11.557592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.815 [2024-07-10 11:00:11.557609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.815 [2024-07-10 11:00:11.557774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.815 [2024-07-10 11:00:11.557925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.815 [2024-07-10 11:00:11.557949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.815 [2024-07-10 11:00:11.557964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.815 [2024-07-10 11:00:11.560316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.815 [2024-07-10 11:00:11.569860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.815 [2024-07-10 11:00:11.570218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.570354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.570384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.816 [2024-07-10 11:00:11.570401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.816 [2024-07-10 11:00:11.570557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.816 [2024-07-10 11:00:11.570708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.816 [2024-07-10 11:00:11.570732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.816 [2024-07-10 11:00:11.570747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.816 [2024-07-10 11:00:11.573084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.816 [2024-07-10 11:00:11.582528] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.816 [2024-07-10 11:00:11.582969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.583188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.583237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.816 [2024-07-10 11:00:11.583255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.816 [2024-07-10 11:00:11.583402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.816 [2024-07-10 11:00:11.583563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.816 [2024-07-10 11:00:11.583587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.816 [2024-07-10 11:00:11.583603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.816 [2024-07-10 11:00:11.585872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.816 [2024-07-10 11:00:11.595029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.816 [2024-07-10 11:00:11.595461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.595649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.595692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.816 [2024-07-10 11:00:11.595709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.816 [2024-07-10 11:00:11.595820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.816 [2024-07-10 11:00:11.596005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.816 [2024-07-10 11:00:11.596029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.816 [2024-07-10 11:00:11.596044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.816 [2024-07-10 11:00:11.598255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.816 [2024-07-10 11:00:11.607653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.816 [2024-07-10 11:00:11.608011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.608231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.608259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.816 [2024-07-10 11:00:11.608276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.816 [2024-07-10 11:00:11.608486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.816 [2024-07-10 11:00:11.608656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.816 [2024-07-10 11:00:11.608679] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.816 [2024-07-10 11:00:11.608695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.816 [2024-07-10 11:00:11.611135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.816 [2024-07-10 11:00:11.620006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.816 [2024-07-10 11:00:11.620316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.620510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.620536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.816 [2024-07-10 11:00:11.620552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.816 [2024-07-10 11:00:11.620736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.816 [2024-07-10 11:00:11.620887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.816 [2024-07-10 11:00:11.620910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.816 [2024-07-10 11:00:11.620925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.816 [2024-07-10 11:00:11.623196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.816 [2024-07-10 11:00:11.632491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.816 [2024-07-10 11:00:11.632901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.633056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-07-10 11:00:11.633082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:54.816 [2024-07-10 11:00:11.633098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:54.816 [2024-07-10 11:00:11.633230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:54.816 [2024-07-10 11:00:11.633392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.816 [2024-07-10 11:00:11.633416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.816 [2024-07-10 11:00:11.633440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.816 [2024-07-10 11:00:11.636111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.076 [2024-07-10 11:00:11.645268] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.645657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.645826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.645852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.645868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.076 [2024-07-10 11:00:11.646032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.076 [2024-07-10 11:00:11.646220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.076 [2024-07-10 11:00:11.646244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.076 [2024-07-10 11:00:11.646260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.076 [2024-07-10 11:00:11.648496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.076 [2024-07-10 11:00:11.657900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.658196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.658402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.658434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.658451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.076 [2024-07-10 11:00:11.658646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.076 [2024-07-10 11:00:11.658792] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.076 [2024-07-10 11:00:11.658816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.076 [2024-07-10 11:00:11.658831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.076 [2024-07-10 11:00:11.661237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.076 [2024-07-10 11:00:11.670379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.670699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.670984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.671012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.671029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.076 [2024-07-10 11:00:11.671175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.076 [2024-07-10 11:00:11.671344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.076 [2024-07-10 11:00:11.671367] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.076 [2024-07-10 11:00:11.671383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.076 [2024-07-10 11:00:11.673475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.076 [2024-07-10 11:00:11.682833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.683379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.683571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.683597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.683613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.076 [2024-07-10 11:00:11.683762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.076 [2024-07-10 11:00:11.683909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.076 [2024-07-10 11:00:11.683932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.076 [2024-07-10 11:00:11.683948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.076 [2024-07-10 11:00:11.686050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.076 [2024-07-10 11:00:11.695418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.695750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.695976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.696009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.696028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.076 [2024-07-10 11:00:11.696212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.076 [2024-07-10 11:00:11.696399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.076 [2024-07-10 11:00:11.696422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.076 [2024-07-10 11:00:11.696448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.076 [2024-07-10 11:00:11.698908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.076 [2024-07-10 11:00:11.708015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.708379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.708548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.708574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.708590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.076 [2024-07-10 11:00:11.708807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.076 [2024-07-10 11:00:11.708994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.076 [2024-07-10 11:00:11.709018] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.076 [2024-07-10 11:00:11.709033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.076 [2024-07-10 11:00:11.711349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.076 [2024-07-10 11:00:11.720540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.076 [2024-07-10 11:00:11.720875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.721145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.076 [2024-07-10 11:00:11.721197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.076 [2024-07-10 11:00:11.721215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.721415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.721605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.721628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.721644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.723874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.077 [2024-07-10 11:00:11.733062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.733436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.733635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.733662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.733685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.733887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.734074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.734097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.734113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.736381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.077 [2024-07-10 11:00:11.745575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.745939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.746086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.746112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.746128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.746288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.746504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.746526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.746540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.749048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.077 [2024-07-10 11:00:11.758154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.758584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.758744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.758787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.758805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.758952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.759120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.759144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.759159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.761501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.077 [2024-07-10 11:00:11.770889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.771222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.771394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.771422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.771459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.771630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.771800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.771823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.771839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.774242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.077 [2024-07-10 11:00:11.783797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.784160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.784356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.784384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.784401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.784598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.784764] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.784802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.784817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.787134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.077 [2024-07-10 11:00:11.796295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.796680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.796897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.796922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.796938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.797070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.797221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.797244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.797260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.799512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.077 [2024-07-10 11:00:11.808971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.809366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.809559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.077 [2024-07-10 11:00:11.809585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.077 [2024-07-10 11:00:11.809601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.077 [2024-07-10 11:00:11.809771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.077 [2024-07-10 11:00:11.809968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.077 [2024-07-10 11:00:11.809992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.077 [2024-07-10 11:00:11.810008] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.077 [2024-07-10 11:00:11.812309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.077 [2024-07-10 11:00:11.821417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.077 [2024-07-10 11:00:11.821778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.822015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.822040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.822071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.822223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.822374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.822397] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.822413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.078 [2024-07-10 11:00:11.824720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.078 [2024-07-10 11:00:11.833948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.078 [2024-07-10 11:00:11.834307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.834471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.834512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.834530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.834713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.834900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.834923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.834939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.078 [2024-07-10 11:00:11.837361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.078 [2024-07-10 11:00:11.846361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.078 [2024-07-10 11:00:11.846707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.846850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.846878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.846896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.847042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.847265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.847294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.847310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.078 [2024-07-10 11:00:11.849759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.078 [2024-07-10 11:00:11.858964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.078 [2024-07-10 11:00:11.859320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.859490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.859529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.859546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.859711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.859844] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.859867] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.859882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.078 [2024-07-10 11:00:11.862159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.078 [2024-07-10 11:00:11.871626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.078 [2024-07-10 11:00:11.872006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.872169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.872197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.872214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.872397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.872520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.872544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.872560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.078 [2024-07-10 11:00:11.874784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.078 [2024-07-10 11:00:11.884284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.078 [2024-07-10 11:00:11.884628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.884791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.884819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.884836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.884983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.885133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.885156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.885177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.078 [2024-07-10 11:00:11.887520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.078 [2024-07-10 11:00:11.896993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.078 [2024-07-10 11:00:11.897504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.897718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.078 [2024-07-10 11:00:11.897753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.078 [2024-07-10 11:00:11.897772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.078 [2024-07-10 11:00:11.897873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.078 [2024-07-10 11:00:11.898077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.078 [2024-07-10 11:00:11.898103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.078 [2024-07-10 11:00:11.898118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.900772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.338 [2024-07-10 11:00:11.909580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.909955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.910230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.910258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.910275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.910453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.910623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.910647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.910663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.912942] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.338 [2024-07-10 11:00:11.922058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.922499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.922683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.922708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.922723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.922887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.923054] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.923077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.923093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.925363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.338 [2024-07-10 11:00:11.934712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.935033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.935284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.935312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.935329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.935515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.935702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.935725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.935741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.937949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.338 [2024-07-10 11:00:11.947387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.947710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.947886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.947914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.947931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.948061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.948175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.948198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.948213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.950649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.338 [2024-07-10 11:00:11.960105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.960478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.960658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.960686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.960703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.960886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.961091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.961114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.961129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.963620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.338 [2024-07-10 11:00:11.972809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.973177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.973323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.973350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.973367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.973524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.973639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.973662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.973678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.975829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.338 [2024-07-10 11:00:11.985204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.985522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.985692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.338 [2024-07-10 11:00:11.985720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.338 [2024-07-10 11:00:11.985737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.338 [2024-07-10 11:00:11.985866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.338 [2024-07-10 11:00:11.986015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.338 [2024-07-10 11:00:11.986038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.338 [2024-07-10 11:00:11.986054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.338 [2024-07-10 11:00:11.988391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.338 [2024-07-10 11:00:11.997839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.338 [2024-07-10 11:00:11.998215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:11.998392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:11.998421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:11.998452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:11.998582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:11.998770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:11.998793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:11.998808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.001086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.339 [2024-07-10 11:00:12.010470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.010806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.010984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.011013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.011030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.011176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.011326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.011350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.011366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.013634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.339 [2024-07-10 11:00:12.022988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.023361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.023609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.023638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.023655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.023803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.023971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.023995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.024010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.026530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.339 [2024-07-10 11:00:12.035658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.036049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.036246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.036273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.036290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.036419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.036581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.036605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.036620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.038772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.339 [2024-07-10 11:00:12.048238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.048593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.048804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.048850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.048866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.049052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.049234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.049258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.049273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.051615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.339 [2024-07-10 11:00:12.060744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.061136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.061312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.061340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.061357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.061540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.061727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.061750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.061766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.064190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.339 [2024-07-10 11:00:12.073283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.073609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.073859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.073910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.073927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.074073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.074188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.074211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.074226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.076399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.339 [2024-07-10 11:00:12.085888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.086330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.086526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.086555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.086577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.086725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.086876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.086899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.086914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.339 [2024-07-10 11:00:12.089338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.339 [2024-07-10 11:00:12.098364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.339 [2024-07-10 11:00:12.098658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.098829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.339 [2024-07-10 11:00:12.098857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.339 [2024-07-10 11:00:12.098874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.339 [2024-07-10 11:00:12.099057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.339 [2024-07-10 11:00:12.099261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.339 [2024-07-10 11:00:12.099284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.339 [2024-07-10 11:00:12.099299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.340 [2024-07-10 11:00:12.101768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.340 [2024-07-10 11:00:12.110930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.340 [2024-07-10 11:00:12.111294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.111456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.111496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.340 [2024-07-10 11:00:12.111514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.340 [2024-07-10 11:00:12.111643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.340 [2024-07-10 11:00:12.111812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.340 [2024-07-10 11:00:12.111835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.340 [2024-07-10 11:00:12.111850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.340 [2024-07-10 11:00:12.114194] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.340 [2024-07-10 11:00:12.123314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.340 [2024-07-10 11:00:12.123619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.123899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.123948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.340 [2024-07-10 11:00:12.123966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.340 [2024-07-10 11:00:12.124155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.340 [2024-07-10 11:00:12.124306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.340 [2024-07-10 11:00:12.124329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.340 [2024-07-10 11:00:12.124344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.340 [2024-07-10 11:00:12.126707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.340 [2024-07-10 11:00:12.135961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.340 [2024-07-10 11:00:12.136331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.136544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.136573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.340 [2024-07-10 11:00:12.136591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.340 [2024-07-10 11:00:12.136773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.340 [2024-07-10 11:00:12.136996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.340 [2024-07-10 11:00:12.137019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.340 [2024-07-10 11:00:12.137035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.340 [2024-07-10 11:00:12.139482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.340 [2024-07-10 11:00:12.148565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.340 [2024-07-10 11:00:12.148879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.149065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.340 [2024-07-10 11:00:12.149090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.340 [2024-07-10 11:00:12.149120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.340 [2024-07-10 11:00:12.149279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.340 [2024-07-10 11:00:12.149477] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.340 [2024-07-10 11:00:12.149501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.340 [2024-07-10 11:00:12.149517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.340 [2024-07-10 11:00:12.151813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.340 [2024-07-10 11:00:12.161175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.340 [2024-07-10 11:00:12.161494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.599 [2024-07-10 11:00:12.161646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.599 [2024-07-10 11:00:12.161684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.599 [2024-07-10 11:00:12.161708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.599 [2024-07-10 11:00:12.161897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.599 [2024-07-10 11:00:12.162074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.599 [2024-07-10 11:00:12.162109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.599 [2024-07-10 11:00:12.162132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.599 [2024-07-10 11:00:12.164310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.599 [2024-07-10 11:00:12.173923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.599 [2024-07-10 11:00:12.174342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.599 [2024-07-10 11:00:12.174531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.599 [2024-07-10 11:00:12.174558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.599 [2024-07-10 11:00:12.174574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.599 [2024-07-10 11:00:12.174738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.599 [2024-07-10 11:00:12.174956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.599 [2024-07-10 11:00:12.174981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.599 [2024-07-10 11:00:12.174996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.177345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.600 [2024-07-10 11:00:12.186526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.186900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.187227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.187274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.187292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.187421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.187595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.187616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.187630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.189944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.600 [2024-07-10 11:00:12.199192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.199591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.199730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.199756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.199771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.199952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.200143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.200167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.200191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.202540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.600 [2024-07-10 11:00:12.211719] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.212115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.212294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.212319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.212335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.212476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.212597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.212621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.212637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.214906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.600 [2024-07-10 11:00:12.224317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.224637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.224865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.224911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.224929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.225129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.225280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.225304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.225319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.227696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.600 [2024-07-10 11:00:12.236916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.237273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.237434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.237460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.237476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.237624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.237775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.237798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.237814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.240136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.600 [2024-07-10 11:00:12.249457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.249812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.250011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.250036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.250052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.250184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.250359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.250382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.250398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.252825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.600 [2024-07-10 11:00:12.262169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.262522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.262692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.262721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.262738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.262903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.263017] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.263040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.263056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.265393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.600 [2024-07-10 11:00:12.274582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.274905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.275081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.275107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.275123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.275286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.600 [2024-07-10 11:00:12.275463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.600 [2024-07-10 11:00:12.275488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.600 [2024-07-10 11:00:12.275504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.600 [2024-07-10 11:00:12.277773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.600 [2024-07-10 11:00:12.287188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.600 [2024-07-10 11:00:12.287531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.287705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.600 [2024-07-10 11:00:12.287733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.600 [2024-07-10 11:00:12.287751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.600 [2024-07-10 11:00:12.287897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.288046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.288070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.288085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.290364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.601 [2024-07-10 11:00:12.299922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.300251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.300395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.300422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.300450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.300614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.300765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.300788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.300804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.303281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.601 [2024-07-10 11:00:12.312490] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.312918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.313138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.313162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.313177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.313305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.313518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.313542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.313558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.315993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.601 [2024-07-10 11:00:12.325011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.325409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.325661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.325691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.325709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.325874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.326060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.326084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.326100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.328533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.601 [2024-07-10 11:00:12.337684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.338038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.338216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.338263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.338280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.338477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.338610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.338634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.338650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.341077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.601 [2024-07-10 11:00:12.349922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.350251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.350402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.350437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.350456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.350637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.350824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.350847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.350862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.353124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.601 [2024-07-10 11:00:12.362509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.362878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.363124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.363171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.363189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.363372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.363498] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.363522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.363538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.365745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.601 [2024-07-10 11:00:12.374988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.375366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.375542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.375571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.375588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.375735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.375885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.375908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.375923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.378136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.601 [2024-07-10 11:00:12.387721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.388144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.388363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.388391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.601 [2024-07-10 11:00:12.388408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.601 [2024-07-10 11:00:12.388600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.601 [2024-07-10 11:00:12.388787] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.601 [2024-07-10 11:00:12.388811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.601 [2024-07-10 11:00:12.388826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.601 [2024-07-10 11:00:12.390985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.601 [2024-07-10 11:00:12.400459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.601 [2024-07-10 11:00:12.400862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.601 [2024-07-10 11:00:12.401050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.602 [2024-07-10 11:00:12.401075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.602 [2024-07-10 11:00:12.401096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.602 [2024-07-10 11:00:12.401228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.602 [2024-07-10 11:00:12.401447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.602 [2024-07-10 11:00:12.401472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.602 [2024-07-10 11:00:12.401487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.602 [2024-07-10 11:00:12.404029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.602 [2024-07-10 11:00:12.413079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.602 [2024-07-10 11:00:12.413399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.602 [2024-07-10 11:00:12.413617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.602 [2024-07-10 11:00:12.413643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.602 [2024-07-10 11:00:12.413659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.602 [2024-07-10 11:00:12.413788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.602 [2024-07-10 11:00:12.413938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.602 [2024-07-10 11:00:12.413961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.602 [2024-07-10 11:00:12.413977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.602 [2024-07-10 11:00:12.416294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.862 [2024-07-10 11:00:12.425918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.862 [2024-07-10 11:00:12.426342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.426493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.426523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.862 [2024-07-10 11:00:12.426541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.862 [2024-07-10 11:00:12.426707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.862 [2024-07-10 11:00:12.426875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.862 [2024-07-10 11:00:12.426899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.862 [2024-07-10 11:00:12.426915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.862 [2024-07-10 11:00:12.429463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.862 [2024-07-10 11:00:12.438535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.862 [2024-07-10 11:00:12.439027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.439243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.439271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.862 [2024-07-10 11:00:12.439289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.862 [2024-07-10 11:00:12.439488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.862 [2024-07-10 11:00:12.439694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.862 [2024-07-10 11:00:12.439718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.862 [2024-07-10 11:00:12.439733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.862 [2024-07-10 11:00:12.442048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.862 [2024-07-10 11:00:12.451104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.862 [2024-07-10 11:00:12.451433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.451609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.451639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.862 [2024-07-10 11:00:12.451657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.862 [2024-07-10 11:00:12.451786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.862 [2024-07-10 11:00:12.451991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.862 [2024-07-10 11:00:12.452014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.862 [2024-07-10 11:00:12.452030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.862 [2024-07-10 11:00:12.454278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.862 [2024-07-10 11:00:12.463582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.862 [2024-07-10 11:00:12.463940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.464119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.464144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.862 [2024-07-10 11:00:12.464160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.862 [2024-07-10 11:00:12.464320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.862 [2024-07-10 11:00:12.464534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.862 [2024-07-10 11:00:12.464558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.862 [2024-07-10 11:00:12.464574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.862 [2024-07-10 11:00:12.466855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.862 [2024-07-10 11:00:12.476369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.862 [2024-07-10 11:00:12.476696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.476899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.862 [2024-07-10 11:00:12.476924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.862 [2024-07-10 11:00:12.476940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.477087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.477264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.477287] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.477303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.479847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.863 [2024-07-10 11:00:12.488649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.489011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.489217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.489242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.489257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.489387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.489589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.489613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.489629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.491944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.863 [2024-07-10 11:00:12.501138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.501533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.501781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.501832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.501849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.501996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.502128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.502152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.502167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.504545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.863 [2024-07-10 11:00:12.513855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.514207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.514368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.514396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.514413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.514604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.514755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.514783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.514800] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.517149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.863 [2024-07-10 11:00:12.526418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.526826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.527070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.527099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.527116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.527264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.527462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.527486] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.527501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.529663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.863 [2024-07-10 11:00:12.539121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.539542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.539765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.539793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.539810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.539975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.540144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.540167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.540183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.542545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.863 [2024-07-10 11:00:12.551714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.552097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.552328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.552364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.552382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.552596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.552724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.552744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.552780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.555100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.863 [2024-07-10 11:00:12.564126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.564494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.564666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.564708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.564725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.564854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.565003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.565027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.565043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.567145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.863 [2024-07-10 11:00:12.576796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.577153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.577348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.863 [2024-07-10 11:00:12.577376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.863 [2024-07-10 11:00:12.577393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.863 [2024-07-10 11:00:12.577608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.863 [2024-07-10 11:00:12.577789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.863 [2024-07-10 11:00:12.577814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.863 [2024-07-10 11:00:12.577829] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.863 [2024-07-10 11:00:12.579858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.863 [2024-07-10 11:00:12.589546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.863 [2024-07-10 11:00:12.589846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.590123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.590148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.590163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.590365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.590592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.590613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.590626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.592839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.864 [2024-07-10 11:00:12.602159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.602491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.602682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.602707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.602722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.602870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.603053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.603077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.603092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.605476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.864 [2024-07-10 11:00:12.614606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.614865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.615033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.615060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.615077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.615242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.615438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.615462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.615478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.617816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.864 [2024-07-10 11:00:12.627091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.627448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.627634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.627658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.627674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.627881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.628086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.628110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.628126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.630550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.864 [2024-07-10 11:00:12.639535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.639938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.640141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.640168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.640186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.640350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.640539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.640560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.640574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.642792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.864 [2024-07-10 11:00:12.651985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.652330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.652545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.652571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.652587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.652785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.652972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.652995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.653011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.655533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.864 [2024-07-10 11:00:12.664443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.664759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.665036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.665085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.665102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.665285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.665482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.665517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.665530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.667914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.864 [2024-07-10 11:00:12.677393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.864 [2024-07-10 11:00:12.677761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.677989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.864 [2024-07-10 11:00:12.678043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:55.864 [2024-07-10 11:00:12.678061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:55.864 [2024-07-10 11:00:12.678225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:55.864 [2024-07-10 11:00:12.678375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.864 [2024-07-10 11:00:12.678399] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.864 [2024-07-10 11:00:12.678414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.864 [2024-07-10 11:00:12.680650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.124 [2024-07-10 11:00:12.689914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.124 [2024-07-10 11:00:12.690258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-10 11:00:12.690443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-10 11:00:12.690470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.124 [2024-07-10 11:00:12.690486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.124 [2024-07-10 11:00:12.690697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.124 [2024-07-10 11:00:12.690918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.124 [2024-07-10 11:00:12.690943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.124 [2024-07-10 11:00:12.690959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.124 [2024-07-10 11:00:12.693234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.124 [2024-07-10 11:00:12.702491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.124 [2024-07-10 11:00:12.702865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-10 11:00:12.703049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-10 11:00:12.703075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.124 [2024-07-10 11:00:12.703091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.124 [2024-07-10 11:00:12.703308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.124 [2024-07-10 11:00:12.703471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.124 [2024-07-10 11:00:12.703495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.124 [2024-07-10 11:00:12.703511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.124 [2024-07-10 11:00:12.705843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.124 [2024-07-10 11:00:12.715127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.124 [2024-07-10 11:00:12.715544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-10 11:00:12.715684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.124 [2024-07-10 11:00:12.715718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.124 [2024-07-10 11:00:12.715736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.124 [2024-07-10 11:00:12.715883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.124 [2024-07-10 11:00:12.715998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.124 [2024-07-10 11:00:12.716021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.124 [2024-07-10 11:00:12.716037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.124 [2024-07-10 11:00:12.718295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.124 [2024-07-10 11:00:12.727703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.124 [2024-07-10 11:00:12.728069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.728253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.728278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.728294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.728503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.728655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.728678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.728694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.730932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.125 [2024-07-10 11:00:12.740444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.740766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.740986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.741045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.741062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.741209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.741323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.741346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.741361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.743798] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.125 [2024-07-10 11:00:12.753151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.753456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.753627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.753653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.753673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.753849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.754019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.754042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.754057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.756284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.125 [2024-07-10 11:00:12.765857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.766207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.766395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.766422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.766449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.766617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.766804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.766828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.766844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.769085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.125 [2024-07-10 11:00:12.778396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.778780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.778952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.778994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.779012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.779160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.779310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.779333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.779349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.781652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.125 [2024-07-10 11:00:12.791039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.791336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.791542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.791571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.791589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.791744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.791913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.791936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.791952] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.794150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.125 [2024-07-10 11:00:12.803645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.804070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.804249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.804288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.804303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.804438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.804647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.804671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.804687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.806941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.125 [2024-07-10 11:00:12.816188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.125 [2024-07-10 11:00:12.816504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.816653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.125 [2024-07-10 11:00:12.816681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.125 [2024-07-10 11:00:12.816698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.125 [2024-07-10 11:00:12.816899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.125 [2024-07-10 11:00:12.817085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.125 [2024-07-10 11:00:12.817109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.125 [2024-07-10 11:00:12.817124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.125 [2024-07-10 11:00:12.819320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.125 [2024-07-10 11:00:12.828722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.829157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.829403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.829438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.829472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.829605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.829809] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.829833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.829849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.832120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.126 [2024-07-10 11:00:12.841175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.841521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.841685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.841710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.841743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.841890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.842076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.842099] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.842115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.844346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.126 [2024-07-10 11:00:12.853838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.854248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.854430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.854473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.854489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.854637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.854797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.854820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.854836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.857232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.126 [2024-07-10 11:00:12.866389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.866842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.867082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.867108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.867125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.867289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.867486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.867515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.867532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.869975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.126 [2024-07-10 11:00:12.879023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.879356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.879554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.879582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.879599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.879800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.879969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.879992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.880008] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.882270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.126 [2024-07-10 11:00:12.891584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.891889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.892075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.892117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.892134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.892298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.892509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.892530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.892543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.894730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.126 [2024-07-10 11:00:12.904036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.904407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.904610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.904638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.904655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.904874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.905061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.905084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.905105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.126 [2024-07-10 11:00:12.907373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.126 [2024-07-10 11:00:12.916591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.126 [2024-07-10 11:00:12.917020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.917216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.126 [2024-07-10 11:00:12.917243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.126 [2024-07-10 11:00:12.917260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.126 [2024-07-10 11:00:12.917407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.126 [2024-07-10 11:00:12.917589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.126 [2024-07-10 11:00:12.917611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.126 [2024-07-10 11:00:12.917624] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.127 [2024-07-10 11:00:12.919915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.127 [2024-07-10 11:00:12.929300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.127 [2024-07-10 11:00:12.929648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-10 11:00:12.929820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-10 11:00:12.929847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.127 [2024-07-10 11:00:12.929865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.127 [2024-07-10 11:00:12.930011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.127 [2024-07-10 11:00:12.930125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.127 [2024-07-10 11:00:12.930148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.127 [2024-07-10 11:00:12.930163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.127 [2024-07-10 11:00:12.932563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.127 [2024-07-10 11:00:12.942136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.127 [2024-07-10 11:00:12.942518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-10 11:00:12.942679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.127 [2024-07-10 11:00:12.942704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.127 [2024-07-10 11:00:12.942719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.127 [2024-07-10 11:00:12.942899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.127 [2024-07-10 11:00:12.943069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.127 [2024-07-10 11:00:12.943093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.127 [2024-07-10 11:00:12.943108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.127 [2024-07-10 11:00:12.945590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.385 [2024-07-10 11:00:12.954723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.385 [2024-07-10 11:00:12.955119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.955318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.955347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.385 [2024-07-10 11:00:12.955365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.385 [2024-07-10 11:00:12.955565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.385 [2024-07-10 11:00:12.955710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.385 [2024-07-10 11:00:12.955748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.385 [2024-07-10 11:00:12.955764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.385 [2024-07-10 11:00:12.958133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.385 [2024-07-10 11:00:12.967223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.385 [2024-07-10 11:00:12.967627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.967867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.967892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.385 [2024-07-10 11:00:12.967923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.385 [2024-07-10 11:00:12.968063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.385 [2024-07-10 11:00:12.968232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.385 [2024-07-10 11:00:12.968256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.385 [2024-07-10 11:00:12.968272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.385 [2024-07-10 11:00:12.970577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.385 [2024-07-10 11:00:12.979767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.385 [2024-07-10 11:00:12.980154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.980344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.980369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.385 [2024-07-10 11:00:12.980385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.385 [2024-07-10 11:00:12.980556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.385 [2024-07-10 11:00:12.980733] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.385 [2024-07-10 11:00:12.980756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.385 [2024-07-10 11:00:12.980772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.385 [2024-07-10 11:00:12.983034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.385 [2024-07-10 11:00:12.992586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.385 [2024-07-10 11:00:12.993073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.993458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:12.993487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.385 [2024-07-10 11:00:12.993504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.385 [2024-07-10 11:00:12.993674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.385 [2024-07-10 11:00:12.993853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.385 [2024-07-10 11:00:12.993877] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.385 [2024-07-10 11:00:12.993892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.385 [2024-07-10 11:00:12.996414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.385 [2024-07-10 11:00:13.005219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.385 [2024-07-10 11:00:13.005570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:13.005724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:13.005749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.385 [2024-07-10 11:00:13.005765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.385 [2024-07-10 11:00:13.005913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.385 [2024-07-10 11:00:13.006141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.385 [2024-07-10 11:00:13.006164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.385 [2024-07-10 11:00:13.006179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.385 [2024-07-10 11:00:13.008621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.385 [2024-07-10 11:00:13.017843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.385 [2024-07-10 11:00:13.018245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:13.018457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-10 11:00:13.018484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.018500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.018667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.018837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.018860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.018875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.021301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.386 [2024-07-10 11:00:13.030682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.031031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.031194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.031236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.031252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.031453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.031605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.031625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.031639] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.034181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.386 [2024-07-10 11:00:13.043114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.043450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.043656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.043681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.043697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.043881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.044086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.044109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.044125] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.046530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.386 [2024-07-10 11:00:13.055653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.055986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.056196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.056221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.056237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.056441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.056610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.056631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.056644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.059169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.386 [2024-07-10 11:00:13.068089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.068506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.068641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.068669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.068691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.068875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.069079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.069103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.069118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.071449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.386 [2024-07-10 11:00:13.080616] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.080923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.081097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.081127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.081145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.081310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.081505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.081526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.081539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.083716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.386 [2024-07-10 11:00:13.093209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.093570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.093720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.093748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.093765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.093911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.094080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.094103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.094118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.096435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.386 [2024-07-10 11:00:13.105806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.106183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.106358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.106383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.106414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.106613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.106775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.106798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.106813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.109021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.386 [2024-07-10 11:00:13.118174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.386 [2024-07-10 11:00:13.118547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.118721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-10 11:00:13.118749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.386 [2024-07-10 11:00:13.118766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.386 [2024-07-10 11:00:13.118913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.386 [2024-07-10 11:00:13.119081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.386 [2024-07-10 11:00:13.119104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.386 [2024-07-10 11:00:13.119119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.386 [2024-07-10 11:00:13.121451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.387 [2024-07-10 11:00:13.130887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.131219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.131382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.131410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.131437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.131684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.131923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.131947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.131962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.387 [2024-07-10 11:00:13.134370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.387 [2024-07-10 11:00:13.143404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.143755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.143899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.143926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.143943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.144089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.144281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.144304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.144319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.387 [2024-07-10 11:00:13.146843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.387 [2024-07-10 11:00:13.156080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.156537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.156700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.156727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.156744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.156927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.157114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.157138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.157153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.387 [2024-07-10 11:00:13.159541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.387 [2024-07-10 11:00:13.168701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.169091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.169267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.169296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.169313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.169535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.169659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.169678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.169691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.387 [2024-07-10 11:00:13.172030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.387 [2024-07-10 11:00:13.181168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.181623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.181897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.181927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.181944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.182092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.182296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.182325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.182342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.387 [2024-07-10 11:00:13.184682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.387 [2024-07-10 11:00:13.193514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.193871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.194092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.194157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.194175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.194393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.194588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.194609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.194622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.387 [2024-07-10 11:00:13.197013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.387 [2024-07-10 11:00:13.206227] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.387 [2024-07-10 11:00:13.206614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.206795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-10 11:00:13.206822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.387 [2024-07-10 11:00:13.206838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.387 [2024-07-10 11:00:13.206970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.387 [2024-07-10 11:00:13.207204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.387 [2024-07-10 11:00:13.207228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.387 [2024-07-10 11:00:13.207243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.646 [2024-07-10 11:00:13.209934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.646 [2024-07-10 11:00:13.218613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.646 [2024-07-10 11:00:13.218970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.646 [2024-07-10 11:00:13.219163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.646 [2024-07-10 11:00:13.219189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.646 [2024-07-10 11:00:13.219205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.646 [2024-07-10 11:00:13.219382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.646 [2024-07-10 11:00:13.219605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.646 [2024-07-10 11:00:13.219626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.646 [2024-07-10 11:00:13.219644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.646 [2024-07-10 11:00:13.221954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.646 [2024-07-10 11:00:13.231166] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.646 [2024-07-10 11:00:13.231536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.646 [2024-07-10 11:00:13.231663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.646 [2024-07-10 11:00:13.231688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.646 [2024-07-10 11:00:13.231718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.646 [2024-07-10 11:00:13.231893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.646 [2024-07-10 11:00:13.232062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.646 [2024-07-10 11:00:13.232085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.646 [2024-07-10 11:00:13.232101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.646 [2024-07-10 11:00:13.234463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.646 [2024-07-10 11:00:13.243810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.244158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.244314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.244339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.244354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.244560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.244739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.647 [2024-07-10 11:00:13.244762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.647 [2024-07-10 11:00:13.244778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.647 [2024-07-10 11:00:13.247097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.647 [2024-07-10 11:00:13.256447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.256830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.256999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.257026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.257043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.257190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.257395] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.647 [2024-07-10 11:00:13.257418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.647 [2024-07-10 11:00:13.257444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.647 [2024-07-10 11:00:13.259791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.647 [2024-07-10 11:00:13.269087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.269446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.269598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.269624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.269640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.269847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.269998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.647 [2024-07-10 11:00:13.270021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.647 [2024-07-10 11:00:13.270037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.647 [2024-07-10 11:00:13.272086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.647 [2024-07-10 11:00:13.281635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.281970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.282214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.282259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.282276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.282423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.282583] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.647 [2024-07-10 11:00:13.282603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.647 [2024-07-10 11:00:13.282616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.647 [2024-07-10 11:00:13.284614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.647 [2024-07-10 11:00:13.294075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.294510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.294771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.294817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.294834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.294963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.295167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.647 [2024-07-10 11:00:13.295190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.647 [2024-07-10 11:00:13.295206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.647 [2024-07-10 11:00:13.297506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.647 [2024-07-10 11:00:13.306566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.306915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.307053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.307079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.307095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.307303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.307505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.647 [2024-07-10 11:00:13.307526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.647 [2024-07-10 11:00:13.307540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.647 [2024-07-10 11:00:13.309735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.647 [2024-07-10 11:00:13.318930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.647 [2024-07-10 11:00:13.319240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.319404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.647 [2024-07-10 11:00:13.319435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.647 [2024-07-10 11:00:13.319452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.647 [2024-07-10 11:00:13.319585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.647 [2024-07-10 11:00:13.319756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.319779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.319793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.321973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.648 [2024-07-10 11:00:13.331432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.331786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.331999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.332027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.332044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.332244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.332377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.332400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.332416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.334795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.648 [2024-07-10 11:00:13.344174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.344502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.344689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.344731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.344749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.344877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.345047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.345070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.345085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.347491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.648 [2024-07-10 11:00:13.356682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.357088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.357245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.357274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.357291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.357503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.357623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.357644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.357658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.359855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.648 [2024-07-10 11:00:13.368828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.369186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.369366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.369391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.369407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.369562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.369699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.369737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.369752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.371948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.648 [2024-07-10 11:00:13.381665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.382077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.382267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.382298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.382314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.382455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.382608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.382629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.382643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.384937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.648 [2024-07-10 11:00:13.394582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.394997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.395169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.395198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.395216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.395363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.395565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.395587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.395601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.397882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.648 [2024-07-10 11:00:13.407087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.407407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.407598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.407624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.648 [2024-07-10 11:00:13.407639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.648 [2024-07-10 11:00:13.407780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.648 [2024-07-10 11:00:13.407966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.648 [2024-07-10 11:00:13.407990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.648 [2024-07-10 11:00:13.408005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.648 [2024-07-10 11:00:13.410146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.648 [2024-07-10 11:00:13.419823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.648 [2024-07-10 11:00:13.420262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.648 [2024-07-10 11:00:13.420503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.420529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.649 [2024-07-10 11:00:13.420550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.649 [2024-07-10 11:00:13.420744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.649 [2024-07-10 11:00:13.420903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.649 [2024-07-10 11:00:13.420926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.649 [2024-07-10 11:00:13.420941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.649 [2024-07-10 11:00:13.423189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.649 [2024-07-10 11:00:13.432252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.649 [2024-07-10 11:00:13.432566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.432754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.432782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.649 [2024-07-10 11:00:13.432799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.649 [2024-07-10 11:00:13.433000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.649 [2024-07-10 11:00:13.433132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.649 [2024-07-10 11:00:13.433155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.649 [2024-07-10 11:00:13.433170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.649 [2024-07-10 11:00:13.435302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.649 [2024-07-10 11:00:13.444661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.649 [2024-07-10 11:00:13.445131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.445351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.445379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.649 [2024-07-10 11:00:13.445396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.649 [2024-07-10 11:00:13.445588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.649 [2024-07-10 11:00:13.445753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.649 [2024-07-10 11:00:13.445777] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.649 [2024-07-10 11:00:13.445792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.649 [2024-07-10 11:00:13.447913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.649 [2024-07-10 11:00:13.457493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.649 [2024-07-10 11:00:13.457857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.458055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.649 [2024-07-10 11:00:13.458102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.649 [2024-07-10 11:00:13.458120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.649 [2024-07-10 11:00:13.458290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.649 [2024-07-10 11:00:13.458469] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.649 [2024-07-10 11:00:13.458494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.649 [2024-07-10 11:00:13.458510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.649 [2024-07-10 11:00:13.460970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.908 [2024-07-10 11:00:13.470282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.908 [2024-07-10 11:00:13.470659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.470849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.470890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.908 [2024-07-10 11:00:13.470909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.908 [2024-07-10 11:00:13.471038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.908 [2024-07-10 11:00:13.471224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.908 [2024-07-10 11:00:13.471248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.908 [2024-07-10 11:00:13.471263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.908 [2024-07-10 11:00:13.473651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.908 [2024-07-10 11:00:13.482852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.908 [2024-07-10 11:00:13.483213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.483411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.483450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.908 [2024-07-10 11:00:13.483469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.908 [2024-07-10 11:00:13.483651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.908 [2024-07-10 11:00:13.483820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.908 [2024-07-10 11:00:13.483844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.908 [2024-07-10 11:00:13.483859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.908 [2024-07-10 11:00:13.486337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.908 [2024-07-10 11:00:13.495281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.908 [2024-07-10 11:00:13.495643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.495853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.495878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.908 [2024-07-10 11:00:13.495894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.908 [2024-07-10 11:00:13.496065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.908 [2024-07-10 11:00:13.496208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.908 [2024-07-10 11:00:13.496233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.908 [2024-07-10 11:00:13.496248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.908 [2024-07-10 11:00:13.498483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.908 [2024-07-10 11:00:13.507832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.908 [2024-07-10 11:00:13.508180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.508338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.908 [2024-07-10 11:00:13.508364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.908 [2024-07-10 11:00:13.508396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.908 [2024-07-10 11:00:13.508535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.908 [2024-07-10 11:00:13.508723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.908 [2024-07-10 11:00:13.508746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.908 [2024-07-10 11:00:13.508762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.908 [2024-07-10 11:00:13.511019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.908 [2024-07-10 11:00:13.520343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.520707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.520876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.520901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.520917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.521075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.521302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.521326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.521342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.523868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.909 [2024-07-10 11:00:13.532812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.533175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.533374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.533399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.533415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.533590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.533778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.533806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.533823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.535994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.909 [2024-07-10 11:00:13.545591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.545934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.546197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.546242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.546259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.546453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.546605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.546628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.546644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.548921] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.909 [2024-07-10 11:00:13.558218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.558595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.558734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.558760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.558777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.558960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.559128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.559151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.559167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.561413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.909 [2024-07-10 11:00:13.570879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.571220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.571395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.571434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.571454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.571620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.571770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.571793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.571814] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.574131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.909 [2024-07-10 11:00:13.583415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.583853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.584083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.584111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.584128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.584257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.584472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.584496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.584512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.586680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.909 [2024-07-10 11:00:13.596068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.596394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.596591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.596620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.596638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.596803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.597025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.597049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.597064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.599055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.909 [2024-07-10 11:00:13.608774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.609137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.609319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.909 [2024-07-10 11:00:13.609344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.909 [2024-07-10 11:00:13.609360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.909 [2024-07-10 11:00:13.609519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.909 [2024-07-10 11:00:13.609721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.909 [2024-07-10 11:00:13.609745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.909 [2024-07-10 11:00:13.609761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.909 [2024-07-10 11:00:13.611914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.909 [2024-07-10 11:00:13.621388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.909 [2024-07-10 11:00:13.621761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.621964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.621990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.622005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.622167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.622355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.622379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.622395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.624722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.910 [2024-07-10 11:00:13.633903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.634275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.634453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.634479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.634495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.634670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.634839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.634863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.634878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.637301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.910 [2024-07-10 11:00:13.646436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.646776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.647025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.647053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.647070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.647217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.647402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.647435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.647453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.649895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.910 [2024-07-10 11:00:13.659182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.659670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.659884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.659912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.659929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.660093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.660280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.660303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.660318] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.662534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.910 [2024-07-10 11:00:13.671459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.671807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.671979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.672008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.672026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.672173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.672359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.672383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.672398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.674684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.910 [2024-07-10 11:00:13.683933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.684324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.684564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.684593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.684610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.684738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.684906] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.684929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.684945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.687242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.910 [2024-07-10 11:00:13.696635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.697019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.697176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.697201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.697215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.697451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.697639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.697663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.697678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.700101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.910 [2024-07-10 11:00:13.709048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.709359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.709523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.709552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.709569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.709770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.709939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.709962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.709978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.910 [2024-07-10 11:00:13.712185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.910 [2024-07-10 11:00:13.721620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.910 [2024-07-10 11:00:13.722008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.722178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.910 [2024-07-10 11:00:13.722206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:56.910 [2024-07-10 11:00:13.722223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:56.910 [2024-07-10 11:00:13.722423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:56.910 [2024-07-10 11:00:13.722603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.910 [2024-07-10 11:00:13.722626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.910 [2024-07-10 11:00:13.722642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.911 [2024-07-10 11:00:13.724884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.170 [2024-07-10 11:00:13.734391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.734737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3582432 Killed "${NVMF_APP[@]}" "$@" 00:29:57.170 [2024-07-10 11:00:13.734954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.735008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.735026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.735227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 11:00:13 -- host/bdevperf.sh@36 -- # tgt_init 00:29:57.170 [2024-07-10 11:00:13.735415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.170 [2024-07-10 11:00:13.735449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.170 [2024-07-10 11:00:13.735466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.170 11:00:13 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:57.170 11:00:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:57.170 11:00:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:57.170 11:00:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.170 [2024-07-10 11:00:13.737833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.170 11:00:13 -- nvmf/common.sh@469 -- # nvmfpid=3584046 00:29:57.170 11:00:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:57.170 11:00:13 -- nvmf/common.sh@470 -- # waitforlisten 3584046 00:29:57.170 11:00:13 -- common/autotest_common.sh@819 -- # '[' -z 3584046 ']' 00:29:57.170 11:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.170 11:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:57.170 11:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:57.170 11:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:57.170 11:00:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.170 [2024-07-10 11:00:13.747044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.747370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.747547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.747576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.747593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.747759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 [2024-07-10 11:00:13.747946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.170 [2024-07-10 11:00:13.747969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.170 [2024-07-10 11:00:13.747986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.170 [2024-07-10 11:00:13.750304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.170 [2024-07-10 11:00:13.759291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.759624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.759766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.759792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.759807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.759977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 [2024-07-10 11:00:13.760145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.170 [2024-07-10 11:00:13.760167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.170 [2024-07-10 11:00:13.760181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.170 [2024-07-10 11:00:13.762090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
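Interleaved with the reconnect failures above, bdevperf.sh has just killed the previous target process (the "Killed \"${NVMF_APP[@]}\"" message from line 35 of the script) and tgt_init is restarting nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waiting for its RPC socket (waitforlisten 3584046). A minimal sketch of that restart-and-wait step, assuming the standard scripts/rpc.py client and the /var/tmp/spdk.sock socket named in the log; this approximates, and is not, the actual nvmf/common.sh code:

# Sketch only -- approximates what nvmfappstart/waitforlisten do at this point.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Poll the RPC socket until the target answers (roughly what waitforlisten amounts to).
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up and answering on /var/tmp/spdk.sock"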
00:29:57.170 [2024-07-10 11:00:13.771641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.772021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.772205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.772230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.772246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.772435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 [2024-07-10 11:00:13.772607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.170 [2024-07-10 11:00:13.772628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.170 [2024-07-10 11:00:13.772642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.170 [2024-07-10 11:00:13.774789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.170 [2024-07-10 11:00:13.781586] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:57.170 [2024-07-10 11:00:13.781659] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.170 [2024-07-10 11:00:13.783990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.784331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.784464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.784490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.784506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.784622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 [2024-07-10 11:00:13.784796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.170 [2024-07-10 11:00:13.784816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.170 [2024-07-10 11:00:13.784830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.170 [2024-07-10 11:00:13.786932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.170 [2024-07-10 11:00:13.796602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.797031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.797169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.797195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.797216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.797366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 [2024-07-10 11:00:13.797574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.170 [2024-07-10 11:00:13.797597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.170 [2024-07-10 11:00:13.797611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.170 [2024-07-10 11:00:13.799753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.170 [2024-07-10 11:00:13.808885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.170 [2024-07-10 11:00:13.809333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.809514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-10 11:00:13.809541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.170 [2024-07-10 11:00:13.809557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.170 [2024-07-10 11:00:13.809738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.170 [2024-07-10 11:00:13.809879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.809899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.809912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.811990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.171 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.171 [2024-07-10 11:00:13.821075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.821461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.821599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.821624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.821639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.821811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.821991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.822027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.822041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.824273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.171 [2024-07-10 11:00:13.833774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.834145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.834318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.834345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.834368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.834502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.834671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.834692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.834706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.837086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
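The "EAL: No free 2048 kB hugepages reported on node 1" notice above is DPDK EAL, during init of the freshly restarted target, reporting that NUMA node 1 has no free 2 MB hugepages; it is informational here, since the reactors start a few lines below, so the memory was evidently satisfied from another node. An illustrative way to see the per-node counts the message refers to (standard sysfs paths, not taken from the test):

# Per-NUMA-node 2048 kB hugepage counts (illustrative check).
for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
    echo "$n: total=$(cat "$n"/nr_hugepages) free=$(cat "$n"/free_hugepages)"
done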
00:29:57.171 [2024-07-10 11:00:13.846531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.846940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.847128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.847153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.847168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.847331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.847501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.847523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.847537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.849892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.171 [2024-07-10 11:00:13.853892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:57.171 [2024-07-10 11:00:13.859354] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.859832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.860021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.860049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.860068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.860253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.860493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.860514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.860529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.863021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.171 [2024-07-10 11:00:13.871931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.872370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.872569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.872596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.872613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.872806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.872996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.873019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.873038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.875308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.171 [2024-07-10 11:00:13.884454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.884864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.885032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.885057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.885072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.885269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.885402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.885443] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.885460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.887779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.171 [2024-07-10 11:00:13.896971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.897395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.897569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.897596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.897612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.897802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.897990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.898014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.898031] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.900499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.171 [2024-07-10 11:00:13.909599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.910072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.910340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.910369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.171 [2024-07-10 11:00:13.910389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.171 [2024-07-10 11:00:13.910567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.171 [2024-07-10 11:00:13.910783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.171 [2024-07-10 11:00:13.910808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.171 [2024-07-10 11:00:13.910826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.171 [2024-07-10 11:00:13.912993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.171 [2024-07-10 11:00:13.922199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.171 [2024-07-10 11:00:13.922560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.171 [2024-07-10 11:00:13.922729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.922754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.172 [2024-07-10 11:00:13.922771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.172 [2024-07-10 11:00:13.922977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.172 [2024-07-10 11:00:13.923165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.172 [2024-07-10 11:00:13.923189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.172 [2024-07-10 11:00:13.923206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.172 [2024-07-10 11:00:13.925493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.172 [2024-07-10 11:00:13.934510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.172 [2024-07-10 11:00:13.934872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.935022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.935050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.172 [2024-07-10 11:00:13.935068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.172 [2024-07-10 11:00:13.935216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.172 [2024-07-10 11:00:13.935367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.172 [2024-07-10 11:00:13.935391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.172 [2024-07-10 11:00:13.935408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.172 [2024-07-10 11:00:13.937854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.172 [2024-07-10 11:00:13.945614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:57.172 [2024-07-10 11:00:13.945747] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.172 [2024-07-10 11:00:13.945767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.172 [2024-07-10 11:00:13.945780] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
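The app_setup_trace notices above spell out the two ways to grab tracepoint data for this target instance (tracepoint group mask 0xFFFF, shm id 0). As a usage example, following those hints verbatim:

# Usage example taken directly from the two NOTICE lines above.
spdk_trace -s nvmf -i 0                 # capture a snapshot of events at runtime
cp /dev/shm/nvmf_trace.0 /tmp/          # or keep the shm file for offline analysis/debug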
00:29:57.172 [2024-07-10 11:00:13.945843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.172 [2024-07-10 11:00:13.945870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.172 [2024-07-10 11:00:13.945873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.172 [2024-07-10 11:00:13.946786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.172 [2024-07-10 11:00:13.947130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.947325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.947350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.172 [2024-07-10 11:00:13.947367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.172 [2024-07-10 11:00:13.947542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.172 [2024-07-10 11:00:13.947680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.172 [2024-07-10 11:00:13.947701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.172 [2024-07-10 11:00:13.947725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.172 [2024-07-10 11:00:13.949945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.172 [2024-07-10 11:00:13.959140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.172 [2024-07-10 11:00:13.959632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.959884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.959910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.172 [2024-07-10 11:00:13.959929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.172 [2024-07-10 11:00:13.960118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.172 [2024-07-10 11:00:13.960254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.172 [2024-07-10 11:00:13.960275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.172 [2024-07-10 11:00:13.960291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.172 [2024-07-10 11:00:13.962399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
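The three "Reactor started on core 1/2/3" lines, together with "Total cores available: 3" just above, are the -m 0xE core mask taking effect (0xE = 0b1110, i.e. cores 1-3). A one-liner to decode such a mask, purely for illustration:

# Decode a reactor core mask such as 0xE (illustration only).
mask=0xE; for i in {0..31}; do (( (mask >> i) & 1 )) && printf 'core %d\n' "$i"; done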
00:29:57.172 [2024-07-10 11:00:13.971686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.172 [2024-07-10 11:00:13.972223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.972443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.972470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.172 [2024-07-10 11:00:13.972489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.172 [2024-07-10 11:00:13.972646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.172 [2024-07-10 11:00:13.972841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.172 [2024-07-10 11:00:13.972863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.172 [2024-07-10 11:00:13.972879] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.172 [2024-07-10 11:00:13.974944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.172 [2024-07-10 11:00:13.983908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.172 [2024-07-10 11:00:13.984334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.984536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.172 [2024-07-10 11:00:13.984564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.172 [2024-07-10 11:00:13.984582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.172 [2024-07-10 11:00:13.984803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.172 [2024-07-10 11:00:13.985008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.172 [2024-07-10 11:00:13.985029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.172 [2024-07-10 11:00:13.985046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.172 [2024-07-10 11:00:13.987102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.431 [2024-07-10 11:00:13.996283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:13.996729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:13.996916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:13.996942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:13.996960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:13.997083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:13.997250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:13.997271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:13.997287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:13.999563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.431 [2024-07-10 11:00:14.008666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.009162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.009320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.009346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.009365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.009553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.009726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.009748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.009764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.011777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.431 [2024-07-10 11:00:14.021198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.021656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.021819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.021845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.021873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.022093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.022228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.022248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.022264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.024312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.431 [2024-07-10 11:00:14.033395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.033794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.033965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.033990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.034006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.034171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.034336] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.034357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.034370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.036764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.431 [2024-07-10 11:00:14.045569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.045931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.046065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.046090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.046106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.046239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.046403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.046446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.046463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.048451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.431 [2024-07-10 11:00:14.057901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.058219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.058370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.058396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.058412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.058608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.058744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.058780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.058794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.061003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.431 [2024-07-10 11:00:14.070262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.070583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.070715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.070740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.070756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.070888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.071070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.071090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.071104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.072968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.431 [2024-07-10 11:00:14.082565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.082911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.083051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.083077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.083094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.083273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.083462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.083484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.083498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.085621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.431 [2024-07-10 11:00:14.094915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.095275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.095432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.095458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.095474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.095639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.095857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.095878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.095891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.097890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.431 [2024-07-10 11:00:14.107160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.107487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.107651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.107676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.107691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.431 [2024-07-10 11:00:14.107807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.431 [2024-07-10 11:00:14.107911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.431 [2024-07-10 11:00:14.107946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.431 [2024-07-10 11:00:14.107960] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.431 [2024-07-10 11:00:14.109882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.431 [2024-07-10 11:00:14.119509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.431 [2024-07-10 11:00:14.119881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.120025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.431 [2024-07-10 11:00:14.120050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.431 [2024-07-10 11:00:14.120066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.120230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.120394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.120414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.120452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.122596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.432 [2024-07-10 11:00:14.131895] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.132195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.132344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.132369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.132385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.132541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.132693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.132720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.132749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.134843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.432 [2024-07-10 11:00:14.144269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.144620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.144782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.144807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.144822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.144922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.145088] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.145108] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.145122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.147062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.432 [2024-07-10 11:00:14.156574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.156933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.157102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.157128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.157143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.157275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.157422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.157468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.157482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.159658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.432 [2024-07-10 11:00:14.168677] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.169043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.169199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.169224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.169240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.169371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.169531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.169553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.169572] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.171792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.432 [2024-07-10 11:00:14.180720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.181011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.181162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.181187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.181203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.181334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.181495] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.181517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.181531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.183654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.432 [2024-07-10 11:00:14.193200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.193511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.193639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.193664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.193679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.193795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.193929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.193949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.193963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.195994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.432 [2024-07-10 11:00:14.205449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.205727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.205893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.205918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.205934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.206082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.206199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.206219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.206233] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.208476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.432 [2024-07-10 11:00:14.217850] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.218131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.218289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.218315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.218331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.218504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.218722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.218743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.218757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.220836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.432 [2024-07-10 11:00:14.230086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.230422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.230578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.230603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.230618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.230783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.230900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.230921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.230935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.232904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.432 [2024-07-10 11:00:14.242525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.432 [2024-07-10 11:00:14.242786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.242965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.432 [2024-07-10 11:00:14.242990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.432 [2024-07-10 11:00:14.243006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.432 [2024-07-10 11:00:14.243138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.432 [2024-07-10 11:00:14.243303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.432 [2024-07-10 11:00:14.243324] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.432 [2024-07-10 11:00:14.243337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.432 [2024-07-10 11:00:14.245401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.690 [2024-07-10 11:00:14.255149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.690 [2024-07-10 11:00:14.255455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.255580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.255606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.690 [2024-07-10 11:00:14.255622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.690 [2024-07-10 11:00:14.255802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.690 [2024-07-10 11:00:14.255951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.690 [2024-07-10 11:00:14.255971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.690 [2024-07-10 11:00:14.255985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.690 [2024-07-10 11:00:14.258068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.690 [2024-07-10 11:00:14.267340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.690 [2024-07-10 11:00:14.267689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.267854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.267880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.690 [2024-07-10 11:00:14.267896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.690 [2024-07-10 11:00:14.268077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.690 [2024-07-10 11:00:14.268262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.690 [2024-07-10 11:00:14.268283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.690 [2024-07-10 11:00:14.268297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.690 [2024-07-10 11:00:14.270364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.690 [2024-07-10 11:00:14.279835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.690 [2024-07-10 11:00:14.280133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.280319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.280344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.690 [2024-07-10 11:00:14.280360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.690 [2024-07-10 11:00:14.280500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.690 [2024-07-10 11:00:14.280668] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.690 [2024-07-10 11:00:14.280690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.690 [2024-07-10 11:00:14.280704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.690 [2024-07-10 11:00:14.282873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.690 [2024-07-10 11:00:14.291978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.690 [2024-07-10 11:00:14.292303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.292476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.292511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.690 [2024-07-10 11:00:14.292528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.690 [2024-07-10 11:00:14.292694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.690 [2024-07-10 11:00:14.292890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.690 [2024-07-10 11:00:14.292911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.690 [2024-07-10 11:00:14.292925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.690 [2024-07-10 11:00:14.295000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.690 [2024-07-10 11:00:14.304218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.690 [2024-07-10 11:00:14.304560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.304713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.690 [2024-07-10 11:00:14.304738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.690 [2024-07-10 11:00:14.304754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.690 [2024-07-10 11:00:14.304919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.690 [2024-07-10 11:00:14.305068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.690 [2024-07-10 11:00:14.305089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.690 [2024-07-10 11:00:14.305103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.690 [2024-07-10 11:00:14.307285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.316665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.317021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.317155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.317180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.317195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.317311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.317501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.317522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.317537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.319586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.328882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.329269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.329418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.329454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.329470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.329603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.329785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.329805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.329819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.331759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.341015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.341338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.341472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.341498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.341513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.341645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.341779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.341799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.341813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.343888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.353172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.353495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.353638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.353663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.353679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.353811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.353989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.354009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.354023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.356189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.365434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.365726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.365968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.365993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.366013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.366146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.366343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.366363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.366377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.368506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.377808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.378089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.378230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.378255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.378270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.378419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.378629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.378650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.378665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.380667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.390118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.390449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.390580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.390605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.390620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.390801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.390949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.390970] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.390984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.393198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.402390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.402736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.402858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.402883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.402899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.403100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.403247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.403267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.403281] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.405284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.414650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.414982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.415136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.415160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.415176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.415340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.415531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.415553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.415567] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.417576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.426875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.427177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.427319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.427345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.427361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.427536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.427688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.427709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.427739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.429925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.439162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.439486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.439612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.439638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.439653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.439818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.440004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.440025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.440039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.442088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.451419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.451717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.451871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.451896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.451912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.452076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.452257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.452277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.452291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.454457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.463622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.463952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.464130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.464155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.464171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.464319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.464494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.464516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.464530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.466647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.475839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.476170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.476304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.476330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.476346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.476508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.476662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.476688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.476703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.478987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.691 [2024-07-10 11:00:14.488146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.488530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.488699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.488725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.488740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.691 [2024-07-10 11:00:14.488905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.691 [2024-07-10 11:00:14.489119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.691 [2024-07-10 11:00:14.489140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.691 [2024-07-10 11:00:14.489154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.691 [2024-07-10 11:00:14.491114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.691 [2024-07-10 11:00:14.500386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.691 [2024-07-10 11:00:14.500667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.500829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.691 [2024-07-10 11:00:14.500854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.691 [2024-07-10 11:00:14.500870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.692 [2024-07-10 11:00:14.500986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.692 [2024-07-10 11:00:14.501154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.692 [2024-07-10 11:00:14.501174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.692 [2024-07-10 11:00:14.501188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.692 [2024-07-10 11:00:14.503349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.692 [2024-07-10 11:00:14.513157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.692 [2024-07-10 11:00:14.513494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.692 [2024-07-10 11:00:14.513643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.692 [2024-07-10 11:00:14.513671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.692 [2024-07-10 11:00:14.513688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.692 [2024-07-10 11:00:14.513902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.692 [2024-07-10 11:00:14.514073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.692 [2024-07-10 11:00:14.514095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.692 [2024-07-10 11:00:14.514114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.950 [2024-07-10 11:00:14.516170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.950 [2024-07-10 11:00:14.525343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.950 [2024-07-10 11:00:14.525716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.525845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.525871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.950 [2024-07-10 11:00:14.525887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.950 [2024-07-10 11:00:14.526036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.950 [2024-07-10 11:00:14.526205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.950 [2024-07-10 11:00:14.526226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.950 [2024-07-10 11:00:14.526240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.950 [2024-07-10 11:00:14.528299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.950 [2024-07-10 11:00:14.537688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.950 [2024-07-10 11:00:14.538031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.538187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.538213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.950 [2024-07-10 11:00:14.538228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.950 [2024-07-10 11:00:14.538376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.950 [2024-07-10 11:00:14.538524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.950 [2024-07-10 11:00:14.538546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.950 [2024-07-10 11:00:14.538560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.950 [2024-07-10 11:00:14.540507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.950 [2024-07-10 11:00:14.550054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.950 [2024-07-10 11:00:14.550392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.550549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.550575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.950 [2024-07-10 11:00:14.550591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.950 [2024-07-10 11:00:14.550707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.950 [2024-07-10 11:00:14.550864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.950 [2024-07-10 11:00:14.550885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.950 [2024-07-10 11:00:14.550899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.950 [2024-07-10 11:00:14.552989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.950 [2024-07-10 11:00:14.562327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.950 [2024-07-10 11:00:14.562675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.562915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.562940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.950 [2024-07-10 11:00:14.562956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.950 [2024-07-10 11:00:14.563104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.950 [2024-07-10 11:00:14.563285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.950 [2024-07-10 11:00:14.563305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.950 [2024-07-10 11:00:14.563319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.950 [2024-07-10 11:00:14.565238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.950 [2024-07-10 11:00:14.574606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.950 [2024-07-10 11:00:14.574975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.575125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.950 [2024-07-10 11:00:14.575151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.575166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.575331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.575553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.575575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.575590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.577768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.951 [2024-07-10 11:00:14.587077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.587388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.587523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.587551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.587567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.587715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.587863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.587884] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.587897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.590184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.951 [2024-07-10 11:00:14.599149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.599516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.599654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.599678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.599694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.599874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.600022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.600042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.600055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.602265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.951 [2024-07-10 11:00:14.611585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.611967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.612104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.612129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.612144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.612243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.612467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.612489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.612503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.614452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.951 [2024-07-10 11:00:14.623784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.624125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.624274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.624299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.624315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.624488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.624609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.624629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.624644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.626716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.951 [2024-07-10 11:00:14.636076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.636437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.636585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.636611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.636627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.636743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.636910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.636930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.636944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.639137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.951 [2024-07-10 11:00:14.648226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.648571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.648699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.648726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.648742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.648906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.649085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.649106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.649120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.651143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.951 [2024-07-10 11:00:14.660455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.660806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.660959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.660985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.951 [2024-07-10 11:00:14.661000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.951 [2024-07-10 11:00:14.661116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.951 [2024-07-10 11:00:14.661298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.951 [2024-07-10 11:00:14.661319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.951 [2024-07-10 11:00:14.661332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.951 [2024-07-10 11:00:14.663379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.951 [2024-07-10 11:00:14.672611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.951 [2024-07-10 11:00:14.672963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.673120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.951 [2024-07-10 11:00:14.673146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.673166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 [2024-07-10 11:00:14.673283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 [2024-07-10 11:00:14.673491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.673513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.673527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 [2024-07-10 11:00:14.675605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.952 [2024-07-10 11:00:14.684877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 [2024-07-10 11:00:14.685221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.685384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.685409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.685431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 [2024-07-10 11:00:14.685566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 [2024-07-10 11:00:14.685702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.685722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.685751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 [2024-07-10 11:00:14.687886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.952 [2024-07-10 11:00:14.697112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 [2024-07-10 11:00:14.697453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.697695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.697721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.697736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 [2024-07-10 11:00:14.697917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 [2024-07-10 11:00:14.698066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.698086] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.698100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 [2024-07-10 11:00:14.700208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.952 [2024-07-10 11:00:14.709234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 [2024-07-10 11:00:14.709544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.709689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.709714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.709730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 [2024-07-10 11:00:14.709867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 [2024-07-10 11:00:14.710051] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.710072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.710086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 [2024-07-10 11:00:14.712120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.952 [2024-07-10 11:00:14.721539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 11:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:57.952 [2024-07-10 11:00:14.721867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 11:00:14 -- common/autotest_common.sh@852 -- # return 0 00:29:57.952 [2024-07-10 11:00:14.722048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.722073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.722089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 11:00:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:57.952 [2024-07-10 11:00:14.722253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 11:00:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:57.952 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:29:57.952 [2024-07-10 11:00:14.722407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.722435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.722451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 [2024-07-10 11:00:14.724402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.952 [2024-07-10 11:00:14.734039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 [2024-07-10 11:00:14.734354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.734495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.734522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.734538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 [2024-07-10 11:00:14.734703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 [2024-07-10 11:00:14.734868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.734889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.734902] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 [2024-07-10 11:00:14.737041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.952 [2024-07-10 11:00:14.746289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 11:00:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.952 [2024-07-10 11:00:14.746632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 11:00:14 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:57.952 [2024-07-10 11:00:14.746795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.746822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.746838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.952 [2024-07-10 11:00:14.746970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.952 11:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.952 [2024-07-10 11:00:14.747153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.952 [2024-07-10 11:00:14.747174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.952 [2024-07-10 11:00:14.747190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.952 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:29:57.952 [2024-07-10 11:00:14.749391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.952 [2024-07-10 11:00:14.752987] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.952 11:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.952 [2024-07-10 11:00:14.758776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.952 11:00:14 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:57.952 11:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.952 [2024-07-10 11:00:14.759051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:29:57.952 [2024-07-10 11:00:14.759233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.952 [2024-07-10 11:00:14.759258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.952 [2024-07-10 11:00:14.759274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.953 [2024-07-10 11:00:14.759447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.953 [2024-07-10 11:00:14.759599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.953 [2024-07-10 11:00:14.759619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.953 [2024-07-10 11:00:14.759633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.953 [2024-07-10 11:00:14.761845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.953 [2024-07-10 11:00:14.771282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.953 [2024-07-10 11:00:14.771616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-07-10 11:00:14.771773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.953 [2024-07-10 11:00:14.771798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:57.953 [2024-07-10 11:00:14.771814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:57.953 [2024-07-10 11:00:14.771961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:57.953 [2024-07-10 11:00:14.772134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.953 [2024-07-10 11:00:14.772160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.953 [2024-07-10 11:00:14.772175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.210 [2024-07-10 11:00:14.774198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.210 [2024-07-10 11:00:14.783940] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.210 [2024-07-10 11:00:14.784363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-10 11:00:14.784574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-10 11:00:14.784601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:58.210 [2024-07-10 11:00:14.784620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:58.210 [2024-07-10 11:00:14.784768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:58.210 [2024-07-10 11:00:14.784940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.210 [2024-07-10 11:00:14.784961] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.210 [2024-07-10 11:00:14.784978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.210 [2024-07-10 11:00:14.786893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.210 Malloc0 00:29:58.210 11:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.210 11:00:14 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.210 11:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.210 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:29:58.210 [2024-07-10 11:00:14.796322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.210 [2024-07-10 11:00:14.796659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-10 11:00:14.796786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-10 11:00:14.796812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:58.210 [2024-07-10 11:00:14.796830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:58.210 [2024-07-10 11:00:14.796965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:58.210 [2024-07-10 11:00:14.797069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.210 [2024-07-10 11:00:14.797090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.210 [2024-07-10 11:00:14.797105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.210 [2024-07-10 11:00:14.799214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.210 11:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.210 11:00:14 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.210 11:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.210 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:29:58.210 [2024-07-10 11:00:14.808712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.210 [2024-07-10 11:00:14.809064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-10 11:00:14.809223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.210 [2024-07-10 11:00:14.809249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce9c20 with addr=10.0.0.2, port=4420 00:29:58.210 [2024-07-10 11:00:14.809265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9c20 is same with the state(5) to be set 00:29:58.210 [2024-07-10 11:00:14.809365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce9c20 (9): Bad file descriptor 00:29:58.210 [2024-07-10 11:00:14.809518] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.210 [2024-07-10 11:00:14.809540] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.210 [2024-07-10 11:00:14.809554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.210 11:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.210 11:00:14 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.210 11:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.210 11:00:14 -- common/autotest_common.sh@10 -- # set +x 00:29:58.210 [2024-07-10 11:00:14.811556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.210 [2024-07-10 11:00:14.813791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.210 11:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.210 11:00:14 -- host/bdevperf.sh@38 -- # wait 3583039 00:29:58.210 [2024-07-10 11:00:14.821026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.210 [2024-07-10 11:00:15.009627] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
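Pulling the interleaved shell trace out of the reset noise above, the target side of this bdevperf run is configured with the following RPC sequence (a consolidated sketch of the commands shown in the trace; rpc_cmd is assumed to resolve to scripts/rpc.py against the already-running nvmf_tgt):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up (the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above), the pending controller reset finally completes ("Resetting controller successful").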
00:30:08.249 00:30:08.249 Latency(us) 00:30:08.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:08.250 Verification LBA range: start 0x0 length 0x4000 00:30:08.250 Nvme1n1 : 15.01 9338.16 36.48 16227.78 0.00 4992.15 749.42 21165.70 00:30:08.250 =================================================================================================================== 00:30:08.250 Total : 9338.16 36.48 16227.78 0.00 4992.15 749.42 21165.70 00:30:08.250 11:00:23 -- host/bdevperf.sh@39 -- # sync 00:30:08.250 11:00:23 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.250 11:00:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.250 11:00:23 -- common/autotest_common.sh@10 -- # set +x 00:30:08.250 11:00:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:08.250 11:00:23 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:08.250 11:00:23 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:08.250 11:00:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:08.250 11:00:23 -- nvmf/common.sh@116 -- # sync 00:30:08.250 11:00:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:08.250 11:00:23 -- nvmf/common.sh@119 -- # set +e 00:30:08.250 11:00:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:08.250 11:00:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:08.250 rmmod nvme_tcp 00:30:08.250 rmmod nvme_fabrics 00:30:08.250 rmmod nvme_keyring 00:30:08.250 11:00:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:08.250 11:00:23 -- nvmf/common.sh@123 -- # set -e 00:30:08.250 11:00:23 -- nvmf/common.sh@124 -- # return 0 00:30:08.250 11:00:23 -- nvmf/common.sh@477 -- # '[' -n 3584046 ']' 00:30:08.250 11:00:23 -- nvmf/common.sh@478 -- # killprocess 3584046 00:30:08.250 11:00:23 -- common/autotest_common.sh@926 -- # '[' -z 3584046 ']' 00:30:08.250 11:00:23 -- common/autotest_common.sh@930 -- # kill -0 3584046 00:30:08.250 11:00:23 -- common/autotest_common.sh@931 -- # uname 00:30:08.250 11:00:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:08.250 11:00:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3584046 00:30:08.250 11:00:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:08.250 11:00:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:08.250 11:00:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3584046' 00:30:08.250 killing process with pid 3584046 00:30:08.250 11:00:23 -- common/autotest_common.sh@945 -- # kill 3584046 00:30:08.250 11:00:23 -- common/autotest_common.sh@950 -- # wait 3584046 00:30:08.250 11:00:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:08.250 11:00:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:08.250 11:00:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:08.250 11:00:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.250 11:00:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:08.250 11:00:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.250 11:00:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.250 11:00:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.185 11:00:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:09.185 00:30:09.185 real 0m22.994s 00:30:09.185 user 1m2.063s 00:30:09.185 sys 0m4.175s 00:30:09.185 11:00:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.185 11:00:25 -- common/autotest_common.sh@10 -- # set +x 00:30:09.185 ************************************ 00:30:09.185 END TEST nvmf_bdevperf 00:30:09.185 ************************************ 00:30:09.185 11:00:25 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:09.185 11:00:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:09.185 11:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:09.185 11:00:25 -- common/autotest_common.sh@10 -- # set +x 00:30:09.185 ************************************ 00:30:09.185 START TEST nvmf_target_disconnect 00:30:09.185 ************************************ 00:30:09.185 11:00:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:09.185 * Looking for test storage... 00:30:09.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.185 11:00:25 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.185 11:00:25 -- nvmf/common.sh@7 -- # uname -s 00:30:09.185 11:00:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.185 11:00:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.185 11:00:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.185 11:00:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.185 11:00:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.185 11:00:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.185 11:00:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.185 11:00:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.185 11:00:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.185 11:00:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.185 11:00:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.185 11:00:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.185 11:00:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.185 11:00:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.185 11:00:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.185 11:00:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.185 11:00:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.185 11:00:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.185 11:00:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.185 11:00:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.185 11:00:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.185 11:00:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.185 11:00:25 -- paths/export.sh@5 -- # export PATH 00:30:09.185 11:00:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.185 11:00:25 -- nvmf/common.sh@46 -- # : 0 00:30:09.185 11:00:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:09.185 11:00:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:09.185 11:00:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:09.185 11:00:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.185 11:00:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.185 11:00:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:09.185 11:00:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:09.185 11:00:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:09.185 11:00:25 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:09.185 11:00:25 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:09.186 11:00:25 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:09.186 11:00:25 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:30:09.186 11:00:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:09.186 11:00:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.186 11:00:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:09.186 11:00:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:09.186 11:00:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:09.186 11:00:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.186 11:00:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:09.186 11:00:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.186 11:00:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:09.186 11:00:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:09.186 11:00:25 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:30:09.186 11:00:25 -- common/autotest_common.sh@10 -- # set +x 00:30:11.082 11:00:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:11.082 11:00:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:11.082 11:00:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:11.082 11:00:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:11.082 11:00:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:11.082 11:00:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:11.082 11:00:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:11.082 11:00:27 -- nvmf/common.sh@294 -- # net_devs=() 00:30:11.082 11:00:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:11.082 11:00:27 -- nvmf/common.sh@295 -- # e810=() 00:30:11.082 11:00:27 -- nvmf/common.sh@295 -- # local -ga e810 00:30:11.082 11:00:27 -- nvmf/common.sh@296 -- # x722=() 00:30:11.082 11:00:27 -- nvmf/common.sh@296 -- # local -ga x722 00:30:11.082 11:00:27 -- nvmf/common.sh@297 -- # mlx=() 00:30:11.082 11:00:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:11.082 11:00:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.082 11:00:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:11.082 11:00:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:11.082 11:00:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:11.082 11:00:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:11.082 11:00:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:11.082 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:11.082 11:00:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:11.082 11:00:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:11.082 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:11.082 11:00:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:11.082 11:00:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:11.083 11:00:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:11.083 11:00:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:11.083 11:00:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:11.083 11:00:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.083 11:00:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:11.083 11:00:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.083 11:00:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:11.083 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:11.083 11:00:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.083 11:00:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:11.083 11:00:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.083 11:00:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:11.083 11:00:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.083 11:00:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:11.083 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:11.083 11:00:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.083 11:00:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:11.083 11:00:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:11.083 11:00:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:11.083 11:00:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:11.083 11:00:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:11.083 11:00:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.083 11:00:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.083 11:00:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.083 11:00:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:11.083 11:00:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.083 11:00:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.083 11:00:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:11.083 11:00:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.083 11:00:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.083 11:00:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:11.083 11:00:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:11.083 11:00:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.083 11:00:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.341 11:00:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.341 11:00:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.341 11:00:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:11.341 11:00:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.341 11:00:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.341 11:00:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.341 11:00:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:11.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:11.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:30:11.341 00:30:11.341 --- 10.0.0.2 ping statistics --- 00:30:11.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.341 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:11.341 11:00:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:30:11.341 00:30:11.341 --- 10.0.0.1 ping statistics --- 00:30:11.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.341 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:30:11.341 11:00:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.341 11:00:27 -- nvmf/common.sh@410 -- # return 0 00:30:11.341 11:00:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:11.341 11:00:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.341 11:00:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:11.341 11:00:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:11.341 11:00:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.341 11:00:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:11.341 11:00:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:11.341 11:00:28 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:11.341 11:00:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:11.341 11:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:11.341 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:30:11.341 ************************************ 00:30:11.341 START TEST nvmf_target_disconnect_tc1 00:30:11.341 ************************************ 00:30:11.341 11:00:28 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:30:11.341 11:00:28 -- host/target_disconnect.sh@32 -- # set +e 00:30:11.341 11:00:28 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.341 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.341 [2024-07-10 11:00:28.110678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.341 [2024-07-10 11:00:28.110934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.341 [2024-07-10 11:00:28.110983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa91280 with addr=10.0.0.2, port=4420 00:30:11.341 [2024-07-10 11:00:28.111020] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:11.341 [2024-07-10 11:00:28.111041] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:11.341 [2024-07-10 11:00:28.111056] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:11.341 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:11.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:11.341 Initializing NVMe Controllers 00:30:11.341 11:00:28 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:11.341 11:00:28 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:11.342 11:00:28 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:30:11.342 11:00:28 -- common/autotest_common.sh@1132 -- # return 0 00:30:11.342 
11:00:28 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:11.342 11:00:28 -- host/target_disconnect.sh@41 -- # set -e 00:30:11.342 00:30:11.342 real 0m0.096s 00:30:11.342 user 0m0.041s 00:30:11.342 sys 0m0.055s 00:30:11.342 11:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.342 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 ************************************ 00:30:11.342 END TEST nvmf_target_disconnect_tc1 00:30:11.342 ************************************ 00:30:11.342 11:00:28 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:11.342 11:00:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:11.342 11:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:11.342 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 ************************************ 00:30:11.342 START TEST nvmf_target_disconnect_tc2 00:30:11.342 ************************************ 00:30:11.342 11:00:28 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:30:11.342 11:00:28 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:11.342 11:00:28 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:11.342 11:00:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:11.342 11:00:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:11.342 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 11:00:28 -- nvmf/common.sh@469 -- # nvmfpid=3587218 00:30:11.342 11:00:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:11.342 11:00:28 -- nvmf/common.sh@470 -- # waitforlisten 3587218 00:30:11.342 11:00:28 -- common/autotest_common.sh@819 -- # '[' -z 3587218 ']' 00:30:11.342 11:00:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.342 11:00:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:11.342 11:00:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.342 11:00:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:11.342 11:00:28 -- common/autotest_common.sh@10 -- # set +x 00:30:11.606 [2024-07-10 11:00:28.195544] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:30:11.606 [2024-07-10 11:00:28.195621] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.606 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.606 [2024-07-10 11:00:28.261378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.606 [2024-07-10 11:00:28.351066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:11.606 [2024-07-10 11:00:28.351206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.606 [2024-07-10 11:00:28.351223] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.606 [2024-07-10 11:00:28.351235] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
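A note on the core masks in this trace: the target is started with -m 0xF0 while the reconnect initiator further down runs with -c 0xF, so the two processes are pinned to disjoint CPU cores, which is why the target reactors reported immediately below come up on cores 4-7. A small sketch decoding the masks (an illustration, not part of the recorded output):

  decode_mask() {
      local mask=$1 i
      for i in {0..31}; do
          (( (mask >> i) & 1 )) && printf '%d ' "$i"
      done
      echo
  }
  printf 'nvmf_tgt  -m 0xF0 -> cores '; decode_mask 0xF0   # 4 5 6 7
  printf 'reconnect -c 0xF  -> cores '; decode_mask 0xF    # 0 1 2 3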
00:30:11.606 [2024-07-10 11:00:28.351323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:11.606 [2024-07-10 11:00:28.351384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:11.606 [2024-07-10 11:00:28.351452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:11.606 [2024-07-10 11:00:28.351456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:12.539 11:00:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:12.539 11:00:29 -- common/autotest_common.sh@852 -- # return 0 00:30:12.539 11:00:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:12.539 11:00:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 11:00:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.539 11:00:29 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.539 11:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 Malloc0 00:30:12.539 11:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.539 11:00:29 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:12.539 11:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 [2024-07-10 11:00:29.207810] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.539 11:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.539 11:00:29 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.539 11:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 11:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.539 11:00:29 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.539 11:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 11:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.539 11:00:29 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.539 11:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 [2024-07-10 11:00:29.236046] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.539 11:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.539 11:00:29 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:12.539 11:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:12.539 11:00:29 -- common/autotest_common.sh@10 -- # set +x 00:30:12.539 11:00:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:12.539 11:00:29 -- host/target_disconnect.sh@50 -- # reconnectpid=3587353 00:30:12.539 11:00:29 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:12.539 11:00:29 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.539 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.438 11:00:31 -- host/target_disconnect.sh@53 -- # kill -9 3587218 00:30:14.438 11:00:31 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Write completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.438 starting I/O failed 00:30:14.438 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 [2024-07-10 11:00:31.260211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed 
with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 [2024-07-10 11:00:31.260547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, 
sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 [2024-07-10 11:00:31.260858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 
starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Read completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 Write completed with error (sct=0, sc=8) 00:30:14.439 starting I/O failed 00:30:14.439 [2024-07-10 11:00:31.261165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.439 [2024-07-10 11:00:31.261375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.439 [2024-07-10 11:00:31.261548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.439 [2024-07-10 11:00:31.261577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.439 qpair failed and we were unable to recover it. 00:30:14.440 [2024-07-10 11:00:31.261732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.440 [2024-07-10 11:00:31.261873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.440 [2024-07-10 11:00:31.261899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.440 qpair failed and we were unable to recover it. 00:30:14.440 [2024-07-10 11:00:31.262051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.440 [2024-07-10 11:00:31.262273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.440 [2024-07-10 11:00:31.262315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.440 qpair failed and we were unable to recover it. 00:30:14.440 [2024-07-10 11:00:31.262510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.440 [2024-07-10 11:00:31.262667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.440 [2024-07-10 11:00:31.262692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.440 qpair failed and we were unable to recover it. 
00:30:14.440 [2024-07-10 11:00:31.262855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.263043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.263069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.263226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.263432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.263459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.263601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.263744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.263772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.263916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.264145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.264191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.264363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.264573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.264600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.264724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.264904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.264929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.265100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.265336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.265364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 
00:30:14.730 [2024-07-10 11:00:31.265535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.265672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.265697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.265889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.266030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.266058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.266232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.266414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.266445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.266609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.266737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.266764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.266924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.267075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.267103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.267276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.267524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.267555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.267678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.267832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.267872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 
00:30:14.730 [2024-07-10 11:00:31.268032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.268185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.268210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.268437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.268607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.268633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.268856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.269002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.269042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.269226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.269454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.269497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.269659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.269818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.269843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.270022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.270198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.270223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.270399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.270576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.270603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 
00:30:14.730 [2024-07-10 11:00:31.270736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.270918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.270944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.271232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.271471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.271509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.271642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.271863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.271889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.272094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.272211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.272237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.272393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.272575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.272601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.272753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.272904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.272944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.273138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.273339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.273364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 
00:30:14.730 [2024-07-10 11:00:31.273523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.273663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.273689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.273854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.274016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.274042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.730 [2024-07-10 11:00:31.274221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.274350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.730 [2024-07-10 11:00:31.274376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.730 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.274524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.274677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.274703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.274863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.275019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.275051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.275202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.275354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.275380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.275580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.275762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.275806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 
00:30:14.731 [2024-07-10 11:00:31.275994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.276148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.276174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.276293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.276450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.276477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.276605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.276788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.276815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.276971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.277100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.277126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.277256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.277415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.277448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.277629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.277756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.277782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.277958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.278171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.278197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 
00:30:14.731 [2024-07-10 11:00:31.278379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.278535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.278584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.278791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.278979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.279007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.279198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.279379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.279405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.279617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.279815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.279859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.280071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.280225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.280252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.280372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.280552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.280579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 00:30:14.731 [2024-07-10 11:00:31.280722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.280907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.280933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.731 qpair failed and we were unable to recover it. 
00:30:14.731 [2024-07-10 11:00:31.281088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.731 [2024-07-10 11:00:31.281277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.281302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.281457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.281610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.281636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.281785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.281933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.281959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.282114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.282269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.282294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.282450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.282571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.282597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.282756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.282912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.282938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.283119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.283275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.283301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-10 11:00:31.283434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.283619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.283645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.283803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.283982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.284009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.284195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.284375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.284401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.284554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.284699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.284725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.284909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.285087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.285113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.285253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.285402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.285433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.285558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.285738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.285767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-10 11:00:31.285950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.286109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.286135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.286285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.286435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.286462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.286615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.286762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.286788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.286964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.287136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.287162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.287352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.287532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.287559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.287775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.287921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.287947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.288069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.288192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.288219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-10 11:00:31.288398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.288562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.288589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.288742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.288924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.288950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.289101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.289220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.289248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-10 11:00:31.289412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.732 [2024-07-10 11:00:31.289587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.289616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.289798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.290029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.290055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.290234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.290384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.290410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.290597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.290756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.290782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-10 11:00:31.290909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.291089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.291115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.291270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.291402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.291433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.291579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.291784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.291810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.291947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.292101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.292128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.292258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.292441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.292469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.292649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.292849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.292890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.293110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.293267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.293295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-10 11:00:31.293503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.293713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.293741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.293884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.294065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.294091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.294246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.294364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.294390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.294543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.294693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.294721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.294876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.295027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.295054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.295235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.295387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.295414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.295609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.295771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.295797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-10 11:00:31.296027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.296202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.296228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.296379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.296519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.296562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.296722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.296893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.296921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.297150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.297333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.297359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.297501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.297690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.297732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.297899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.298073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.298115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-10 11:00:31.298296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.298458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.733 [2024-07-10 11:00:31.298485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-10 11:00:31.298719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.298888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.298916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.299122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.299279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.299305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.299461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.299590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.299616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.299766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.299917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.299960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.300115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.300295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.300321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.300482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.300601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.300627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.300784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.301000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.301026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 
00:30:14.734 [2024-07-10 11:00:31.301204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.301389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.301415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.301606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.301745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.301773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.301897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.302079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.302105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.302226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.302383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.302409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.302574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.302718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.302764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.302948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.303101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.303128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.303309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.303494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.303520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 
00:30:14.734 [2024-07-10 11:00:31.303702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.303893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.303935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.304081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.304258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.304283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.304466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.304643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.304669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.304836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.305028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.305055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.305236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.305418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.305449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.305635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.305789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.305817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.305972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.306149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.306190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 
00:30:14.734 [2024-07-10 11:00:31.306370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.306510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.306555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.306708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.306940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.306966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.307147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.307298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.307325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.307521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.307743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.307791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.307966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.308151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.308184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.308358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.308552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.308582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.734 [2024-07-10 11:00:31.308746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.308959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.308984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 
00:30:14.734 [2024-07-10 11:00:31.309165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.309314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.734 [2024-07-10 11:00:31.309339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.734 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.309499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.309656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.309684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.309867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.310039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.310068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.310222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.310403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.310434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.310572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.310727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.310753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.310911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.311068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.311093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.311269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.311420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.311452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 
00:30:14.735 [2024-07-10 11:00:31.311603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.311756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.311798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.311967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.312141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.312169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.312328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.312459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.312486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.312668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.312862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.312888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.313018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.313171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.313197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.313329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.313480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.313507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.313667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.313850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.313879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 
00:30:14.735 [2024-07-10 11:00:31.314056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.314209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.314235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.314417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.314579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.314605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.314732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.314916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.314942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.315094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.315272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.315301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.315478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.315636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.315662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.315821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.316007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.316036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.316206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.316385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.316413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 
00:30:14.735 [2024-07-10 11:00:31.316598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.316754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.316780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.317019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.317167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.317196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.317374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.317524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.317552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.317701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.317846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.317874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.318044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.318222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.318248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.318433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.318586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.318613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 00:30:14.735 [2024-07-10 11:00:31.318773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.318962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.735 [2024-07-10 11:00:31.318992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.735 qpair failed and we were unable to recover it. 
00:30:14.736 [2024-07-10 11:00:31.319174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.319328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.319354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.319485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.319636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.319663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.319784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.319967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.319992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.320170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.320346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.320371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.320523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.320672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.320698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.320852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.321049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.321078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.321281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.321461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.321488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 
00:30:14.736 [2024-07-10 11:00:31.321672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.321816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.321841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.322031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.322202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.322232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.322387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.322545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.322572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.322742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.322919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.322960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.323133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.323277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.323304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.323459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.323615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.323642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.323818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.324008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.324037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 
00:30:14.736 [2024-07-10 11:00:31.324211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.324409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.324443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.324593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.324741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.324766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.324937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.325118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.325144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.325332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.325460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.325504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.325651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.325820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.325848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.326061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.326240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.326270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.326421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.326571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.326614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 
00:30:14.736 [2024-07-10 11:00:31.326748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.326917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.326943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.327136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.327285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.327312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.327442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.327603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.327629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.327757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.327920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.327946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.328077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.328257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.328283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.328445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.328570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.328597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.328784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.328901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.328927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 
00:30:14.736 [2024-07-10 11:00:31.329100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.329268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.329298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.329479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.329629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.329659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.329840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.329998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.330025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.330215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.330371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.330396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.330568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.330722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.330747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.330873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.331182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 
00:30:14.736 [2024-07-10 11:00:31.331544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.331877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.331996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.332021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.736 qpair failed and we were unable to recover it. 00:30:14.736 [2024-07-10 11:00:31.332186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.332387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.736 [2024-07-10 11:00:31.332415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.332606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.332769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.332795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.332969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.333151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.333182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.333336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.333563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.333591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.333777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.333930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.333957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 
00:30:14.737 [2024-07-10 11:00:31.334122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.334268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.334296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.334495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.334623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.334648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.334794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.334946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.334972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.335106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.335291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.335317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.335476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.335704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.335730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.335883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.336075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.336103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.336273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.336453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.336478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 
00:30:14.737 [2024-07-10 11:00:31.336611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.336760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.336790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.336949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.337106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.337131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.337327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.337462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.337491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.337667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.337833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.337858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.338043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.338198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.338235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.338399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.338587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.338615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.338785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.338983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.339009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 
00:30:14.737 [2024-07-10 11:00:31.339172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.339324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.339350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.339519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.339669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.339695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.339883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.340033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.340059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.340188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.340385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.340441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.340603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.340764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.340790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.340946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.341100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.341126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.341259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.341440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.341468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 
00:30:14.737 [2024-07-10 11:00:31.341671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.341844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.341874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.342058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.342209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.342234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.342429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.342554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.342579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.342725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.342879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.342905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.343056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.343246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.343288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.343462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.343609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.343635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.343824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.343993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.344022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 
00:30:14.737 [2024-07-10 11:00:31.344197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.344364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.344393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.344582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.344738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.344765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.344917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.345063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.345089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.345218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.345395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.345437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.345618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.345775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.345801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.345978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.346159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.346187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.346383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.346562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.346588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 
00:30:14.737 [2024-07-10 11:00:31.346740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.346910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.346938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.347067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.347251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.737 [2024-07-10 11:00:31.347277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.737 qpair failed and we were unable to recover it. 00:30:14.737 [2024-07-10 11:00:31.347455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.347634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.347660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.347827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.348023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.348052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.348235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.348395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.348446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.348590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.348764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.348803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.348957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.349111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.349137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 
00:30:14.738 [2024-07-10 11:00:31.349281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.349443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.349486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.349636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.349793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.349819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.350001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.350157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.350197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.350392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.350576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.350602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.350761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.350911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.350936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.351069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.351257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.351301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.351447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.351587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.351614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 
00:30:14.738 [2024-07-10 11:00:31.351748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.351890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.351920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.352092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.352245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.352271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.352456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.352608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.352633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.352813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.353005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.353030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.353213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.353351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.353379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.353596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.353722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.353747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.353892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.354033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.354059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 
00:30:14.738 [2024-07-10 11:00:31.354237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.354385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.354411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.354538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.354719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.354745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.354909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.355088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.738 [2024-07-10 11:00:31.355114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.738 qpair failed and we were unable to recover it. 00:30:14.738 [2024-07-10 11:00:31.355245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.355369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.355394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.355565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.355699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.355735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.355955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.356135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.356176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.356355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.356511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.356538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 
00:30:14.739 [2024-07-10 11:00:31.356689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.356856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.356881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.357005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.357181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.357206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.357389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.357524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.357550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.357702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.357836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.357862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.357985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.358136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.358162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.358323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.358488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.358515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.358654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.358804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.358832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 
00:30:14.739 [2024-07-10 11:00:31.358991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.359146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.359172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.359330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.359510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.359536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.359690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.359911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.359936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.360091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.360221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.360248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.360401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.360554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.360581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.360764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.360917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.360943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.361129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.361258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.361285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 
00:30:14.739 [2024-07-10 11:00:31.361490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.361649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.361675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.361850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.362020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.362053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.362222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.362432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.362458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.739 [2024-07-10 11:00:31.362616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.362744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.739 [2024-07-10 11:00:31.362770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.739 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.362899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.363047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.363073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.363253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.363439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.363465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.363641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.363792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.363819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 
00:30:14.740 [2024-07-10 11:00:31.364026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.364274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.364339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.364504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.364631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.364656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.364820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.364971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.364997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.365198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.365323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.365352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.365533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.365720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.365750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.365930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.366072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.366100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.366266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.366463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.366489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 
00:30:14.740 [2024-07-10 11:00:31.366647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.366827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.366852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.366980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.367134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.367160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.367342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.367521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.367547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.367699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.367895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.367921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.368076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.368245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.368273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.368410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.368598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.368624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.368745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.368905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.368934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 
00:30:14.740 [2024-07-10 11:00:31.369112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.369269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.369299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.369439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.369589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.369615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.369787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.369916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.369944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.370147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.370280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.370305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.370457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.370638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.740 [2024-07-10 11:00:31.370664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.740 qpair failed and we were unable to recover it. 00:30:14.740 [2024-07-10 11:00:31.370825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.371005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.371046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.371248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.371401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.371435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 
00:30:14.741 [2024-07-10 11:00:31.371665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.371827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.371853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.372029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.372199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.372227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.372377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.372554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.372580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.372771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.372940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.372966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.373170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.373378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.373404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.373595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.373729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.373754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.373884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.374002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.374028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 
00:30:14.741 [2024-07-10 11:00:31.374148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.374340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.374369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.374597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.374770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.374796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.374969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.375137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.375163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.375359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.375532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.375562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.375773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.375929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.375955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.376109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.376296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.376322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.376505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.376685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.376721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 
00:30:14.741 [2024-07-10 11:00:31.376875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.377038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.377081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.377259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.377422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.377453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.377603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.377784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.377809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.377987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.378205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.378233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.378400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.378537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.741 [2024-07-10 11:00:31.378565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.741 qpair failed and we were unable to recover it. 00:30:14.741 [2024-07-10 11:00:31.378728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.378866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.378894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.379071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.379183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.379208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 
00:30:14.742 [2024-07-10 11:00:31.379392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.379534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.379560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.379718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.379868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.379909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.380087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.380237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.380262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.380415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.380557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.380583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.380759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.380873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.380899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.381057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.381225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.381255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.381447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.381625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.381653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 
00:30:14.742 [2024-07-10 11:00:31.381787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.381948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.381976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.382149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.382326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.382352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.382522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.382678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.382706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.382869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.383051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.383076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.383208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.383387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.383413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.383597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.383764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.383792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.383967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.384120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.384150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 
00:30:14.742 [2024-07-10 11:00:31.384321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.384497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.384524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.742 qpair failed and we were unable to recover it. 00:30:14.742 [2024-07-10 11:00:31.384650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.742 [2024-07-10 11:00:31.384827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.384853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.385057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.385227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.385255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.385422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.385551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.385578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.385733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.385909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.385950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.386112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.386290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.386315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.386470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.386598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.386624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 
00:30:14.743 [2024-07-10 11:00:31.386769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.386925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.386950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.387135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.387289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.387316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.387514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.387641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.387666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.387816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.387929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.387954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.388131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.388327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.388355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.388525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.388654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.388679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.388810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.388942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.388967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 
00:30:14.743 [2024-07-10 11:00:31.389086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.389244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.389270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.389446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.389585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.389610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.389735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.389893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.389919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.390058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.390229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.390257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.390389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.390570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.390596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.390778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.390932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.390957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.391080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.391233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.391258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 
00:30:14.743 [2024-07-10 11:00:31.391410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.391571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.391598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.391759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.391936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.391962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.743 qpair failed and we were unable to recover it. 00:30:14.743 [2024-07-10 11:00:31.392118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.743 [2024-07-10 11:00:31.392243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.392270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.392449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.392624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.392654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.392823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.393041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.393098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.393307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.393443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.393470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.393637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.393792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.393844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 
00:30:14.744 [2024-07-10 11:00:31.394026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.394219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.394251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.394420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.394591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.394619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.394817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.395077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.395105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.395252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.395432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.395476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.395619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.395790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.395815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.395967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.396145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.396172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.396337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.396532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.396561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 
00:30:14.744 [2024-07-10 11:00:31.396738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.396862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.396887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.397065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.397195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.397223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.397394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.397565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.397593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.397750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.397888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.397916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.398116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.398288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.398316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.398494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.398651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.398677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.398793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.398971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.398997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 
00:30:14.744 [2024-07-10 11:00:31.399139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.399310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.399338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.399489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.399640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.399666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.399820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.399985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.400013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.400173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.400308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.400337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.400505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.400650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.744 [2024-07-10 11:00:31.400680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.744 qpair failed and we were unable to recover it. 00:30:14.744 [2024-07-10 11:00:31.400863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.401014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.401040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.401194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.401366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.401395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 
00:30:14.745 [2024-07-10 11:00:31.401583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.401753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.401782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.402028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.402180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.402226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.402406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.402582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.402610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.402784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.402964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.403006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.403215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.403369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.403395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.403558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.403702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.403728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.404018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.404363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.404414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 
00:30:14.745 [2024-07-10 11:00:31.404639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.404803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.404829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.405001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.405160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.405189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.405385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.405585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.405614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.405820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.405977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.406002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.406154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.406274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.406299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.406462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.406649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.406675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.406852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.407140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.407168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 
00:30:14.745 [2024-07-10 11:00:31.407376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.407535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.407561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.407745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.407902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.407944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.408141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.408334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.408362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.408534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.408675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.408703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.408875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.409022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.409047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.409180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.409385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.409411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.409570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.409703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.409729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 
00:30:14.745 [2024-07-10 11:00:31.409853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.410038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.410063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.410222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.410387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.410415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.410593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.410802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.410862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.411030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.411203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.411229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.411379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.411576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.411605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.411786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.411940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.411993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.412190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.412334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.412362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 
00:30:14.745 [2024-07-10 11:00:31.412512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.412683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.412713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.412859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.413135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.413160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.413310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.413479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.413508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.413673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.413838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.413863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.414021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.414234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.414260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.745 [2024-07-10 11:00:31.414405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.414567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.745 [2024-07-10 11:00:31.414609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.745 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.414807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.414985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.415028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 
00:30:14.746 [2024-07-10 11:00:31.415176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.415345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.415373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.415542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.415703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.415730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.416077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.416284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.416312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.416514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.416666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.416692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.416911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.417035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.417060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.417284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.417466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.417495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.417639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.417830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.417858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 
00:30:14.746 [2024-07-10 11:00:31.418023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.418156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.418181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.418362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.418537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.418563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.418696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.418818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.418843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.419165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.419361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.419387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.419574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.419727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.419753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.419883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.420088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.420114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.420286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.420448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.420490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 
00:30:14.746 [2024-07-10 11:00:31.420638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.420819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.420845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.421021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.421205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.421230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.421411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.421578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.421607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.421740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.421920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.421949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.422113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.422317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.422345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.422518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.422697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.422723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.422917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.423084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.423113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 
00:30:14.746 [2024-07-10 11:00:31.423304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.423463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.423490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.423667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.423943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.423969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.424123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.424275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.424301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.424438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.424621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.424647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.424812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.424985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.425013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.425181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.425352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.425380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.425561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.425715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.425761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 
00:30:14.746 [2024-07-10 11:00:31.425942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.426097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.426122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.426301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.426476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.426520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.426696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.426874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.426900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.427034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.427158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.427185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.427386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.427596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.427622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.427824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.427993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.428022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 00:30:14.746 [2024-07-10 11:00:31.428268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.428438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.746 [2024-07-10 11:00:31.428483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.746 qpair failed and we were unable to recover it. 
00:30:14.747 [2024-07-10 11:00:31.428643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.428845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.428910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.429116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.429285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.429313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.429481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.429621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.429649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.429792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.430040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.430093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.430292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.430508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.430536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.430709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.430874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.430902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.431095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.431262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.431291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 
00:30:14.747 [2024-07-10 11:00:31.431475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.431629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.431656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.431835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.431982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.432008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.432161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.432307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.432333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.432527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.432713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.432755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.432922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.433117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.433155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.433332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.433525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.433554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.433723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.433895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.433923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 
00:30:14.747 [2024-07-10 11:00:31.434083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.434251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.434279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.434407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.434562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.434591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.434745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.434921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.434947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.435088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.435221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.435250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.435460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.435614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.435640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.435831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.435993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.436021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.436199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.436383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.436433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 
00:30:14.747 [2024-07-10 11:00:31.436571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.436702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.436730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.436927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.437129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.437155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.437312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.437465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.437492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.437648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.437802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.437829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.437986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.438284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.438583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 
00:30:14.747 [2024-07-10 11:00:31.438873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.438994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.439020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.439200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.439348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.439373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.439566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.439761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.439787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.439937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.440110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.440136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.440304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.440447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.440473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.440632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.440805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.440899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.441076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.441253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.441278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 
00:30:14.747 [2024-07-10 11:00:31.441486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.441657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.441685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.441822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.441978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.442020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.442193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.442397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.442422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.747 qpair failed and we were unable to recover it. 00:30:14.747 [2024-07-10 11:00:31.442579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.747 [2024-07-10 11:00:31.442709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.442738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.442879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.443073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.443101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.443279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.443445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.443471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.443623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.443777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.443818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 
00:30:14.748 [2024-07-10 11:00:31.444020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.444149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.444175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.444320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.444493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.444526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.444729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.444937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.444967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.445171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.445376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.445405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.445611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.445740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.445765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.445975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.446175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.446204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.446872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.447486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.447519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 
00:30:14.748 [2024-07-10 11:00:31.447731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.447859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.447885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.448056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.448192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.448222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.448390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.448576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.448607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.448805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.448935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.448960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.449143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.449328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.449358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.449527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.449693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.449719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.449907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.450099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.450125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 
00:30:14.748 [2024-07-10 11:00:31.450280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.450408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.450441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.450627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.450755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.450781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.450956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.451134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.451178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.451377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.451587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.451613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.451792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.451965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.451990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.452141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.452329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.452370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.452569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.452722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.452748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 
00:30:14.748 [2024-07-10 11:00:31.452915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.453116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.453142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.453297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.453461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.453488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.453643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.453829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.453871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.454052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.454192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.454222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.454394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.454563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.454592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.454746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.454899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.454940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.455095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.455222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.455249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 
00:30:14.748 [2024-07-10 11:00:31.455418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.455558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.748 [2024-07-10 11:00:31.455584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.748 qpair failed and we were unable to recover it. 00:30:14.748 [2024-07-10 11:00:31.455712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.455859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.455891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.456109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.456271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.456297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.456452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.456634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.456662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.456886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.457042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.457067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.457204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.457357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.457383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.457557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.457712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.457763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 
00:30:14.749 [2024-07-10 11:00:31.457931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.458112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.458137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.458292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.458482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.458508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.458689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.458892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.458920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.459065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.459239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.459281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.459460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.459627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.459655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.459819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.460024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.460049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.460183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.460348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.460373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 
00:30:14.749 [2024-07-10 11:00:31.460545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.460682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.460708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.460858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.461015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.461043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.461213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.461358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.461388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.461543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.461685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.461713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.461864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.462017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.462042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.462245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.462407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.462442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.462586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.462720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.462750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 
00:30:14.749 [2024-07-10 11:00:31.462900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.463055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.463084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.463233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.463423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.463483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.463653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.463797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.463826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.463995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.464215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.464265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.464404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.464599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.464625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.464747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.464873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.464898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.465035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.465201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.465229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 
00:30:14.749 [2024-07-10 11:00:31.465410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.465541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.465567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.465715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.466003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.466031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.466202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.466394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.466439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.466586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.466757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.466785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.466982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.467160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.467206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.467399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.467543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.467571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.467721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.467843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.467869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 
00:30:14.749 [2024-07-10 11:00:31.468102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.468247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.468273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.468451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.468606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.468633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.749 qpair failed and we were unable to recover it. 00:30:14.749 [2024-07-10 11:00:31.468792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.749 [2024-07-10 11:00:31.468947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.468990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.469164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.469299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.469324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.469488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.469620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.469645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.469797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.469954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.469979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.470163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.470365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.470394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 
00:30:14.750 [2024-07-10 11:00:31.470558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.470687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.470713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.470844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.470978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.471007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.471180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.471317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.471345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.471540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.471659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.471685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.471823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.471977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.472002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.472153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.472347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.472373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.472512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.472675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.472704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 
00:30:14.750 [2024-07-10 11:00:31.472867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.473032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.473060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.473267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.473390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.473415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.473626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.473792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.473834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.474002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.474133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.474161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.474361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.474519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.474561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.474741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.474909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.474937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.475112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.475293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.475320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 
00:30:14.750 [2024-07-10 11:00:31.475500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.475642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.475670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.475808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.475997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.476025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.476197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.476309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.476334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.476512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.476704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.476731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.476865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.477095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.477121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.477274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.477436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.477462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.477596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.477741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.477782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 
00:30:14.750 [2024-07-10 11:00:31.477920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.478101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.478127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.478284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.478450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.478493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.478619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.478781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.478806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.479008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.479157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.479187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.479330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.479470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.479500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.479674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.479868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.479896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.480038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.480227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.480255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 
00:30:14.750 [2024-07-10 11:00:31.480404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.480576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.480618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.480823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.480977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.481120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.481422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.481808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.481962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.482121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.482294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.482327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.750 qpair failed and we were unable to recover it. 00:30:14.750 [2024-07-10 11:00:31.482465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.750 [2024-07-10 11:00:31.482608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.482639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 
00:30:14.751 [2024-07-10 11:00:31.482808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.482976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.483004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 00:30:14.751 [2024-07-10 11:00:31.483182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.483335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.483377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 00:30:14.751 [2024-07-10 11:00:31.483521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.483662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.483690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 00:30:14.751 [2024-07-10 11:00:31.483843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.484007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.484035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 00:30:14.751 [2024-07-10 11:00:31.484202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.484332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.484361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 00:30:14.751 [2024-07-10 11:00:31.484543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.484671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.484696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 00:30:14.751 [2024-07-10 11:00:31.484853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.485063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.751 [2024-07-10 11:00:31.485092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:14.751 qpair failed and we were unable to recover it. 
00:30:14.751 [2024-07-10 11:00:31.485255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.751 [2024-07-10 11:00:31.485444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.751 [2024-07-10 11:00:31.485470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:14.751 qpair failed and we were unable to recover it.
00:30:14.751 [2024-07-10 11:00:31.486867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bbdc0 is same with the state(5) to be set
00:30:14.751 [2024-07-10 11:00:31.487083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.751 [2024-07-10 11:00:31.487243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.751 [2024-07-10 11:00:31.487290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:14.751 qpair failed and we were unable to recover it.
00:30:14.753 [2024-07-10 11:00:31.516986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.753 [2024-07-10 11:00:31.517132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.753 [2024-07-10 11:00:31.517163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:14.753 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from 11:00:31.485 through 11:00:31.540 for tqpair=0x23ae350, 0x7f2ea0000b90, and 0x7f2ea8000b90, all with addr=10.0.0.2, port=4420 ...]
00:30:15.029 [2024-07-10 11:00:31.540961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.541131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.541156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.541282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.541493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.541522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.541697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.541875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.541918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.542073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.542228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.542253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.542381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.542542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.542586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.542740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.542956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.542998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.543123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.543246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.543271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 
00:30:15.029 [2024-07-10 11:00:31.543395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.543596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.543622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.029 qpair failed and we were unable to recover it. 00:30:15.029 [2024-07-10 11:00:31.543783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.543970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.029 [2024-07-10 11:00:31.543995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.544178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.544336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.544361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.544512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.544706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.544750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.544917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.545141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.545185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.545340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.545467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.545493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.545620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.545746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.545773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 
00:30:15.030 [2024-07-10 11:00:31.545930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.546061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.546085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.546244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.546401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.546437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.546623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.546786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.546814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.547032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.547201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.547226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.547383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.547534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.547583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.547734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.547930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.547972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.548144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.548280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.548304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 
00:30:15.030 [2024-07-10 11:00:31.548479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.548673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.548715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.548871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.549064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.549106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.549260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.549387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.549414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.549580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.549751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.549793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.549941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.550099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.550124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.550316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.550469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.550497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.550666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.550862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.550904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 
00:30:15.030 [2024-07-10 11:00:31.551031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.551174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.551214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.551372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.551545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.551572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.551750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.551921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.551951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.552106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.552239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.552264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.552395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.552540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.552584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.552744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.552903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.552947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 00:30:15.030 [2024-07-10 11:00:31.553082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.553241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.030 [2024-07-10 11:00:31.553268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.030 qpair failed and we were unable to recover it. 
00:30:15.030 [2024-07-10 11:00:31.553449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.553608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.553650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.553836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.554023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.554066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.554185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.554338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.554363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.554532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.554696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.554748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.554910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.555079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.555106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.555286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.555438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.555464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.555614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.555794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.555821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 
00:30:15.031 [2024-07-10 11:00:31.556004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.556150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.556175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.556299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.556431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.556456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.556601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.556758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.556786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.556958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.557112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.557137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.557291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.557418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.557473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.557648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.557843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.557870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.558029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.558208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.558238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 
00:30:15.031 [2024-07-10 11:00:31.558364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.558524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.558568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.558720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.558885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.558928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.559108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.559285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.559310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.559440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.559611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.559654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.559807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.560041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.560066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.560235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.560381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.560406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.560566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.560734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.560777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 
00:30:15.031 [2024-07-10 11:00:31.560951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.561117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.561158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.561308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.561500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.561528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.561718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.561908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.561950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.562106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.562273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.562298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.562454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.562626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.562669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.031 [2024-07-10 11:00:31.562835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.563010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.031 [2024-07-10 11:00:31.563038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.031 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.563181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.563341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.563365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 
00:30:15.032 [2024-07-10 11:00:31.563542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.563709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.563754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.563906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.564085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.564113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.564266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.564414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.564445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.564611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.564811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.564853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.565007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.565175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.565218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.565373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.565540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.565582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.565762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.565929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.565972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 
00:30:15.032 [2024-07-10 11:00:31.566131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.566284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.566309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.566440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.566582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.566624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.566801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.566968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.566995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.567154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.567305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.567330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.567451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.567614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.567655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.567824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.568009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.568052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.568183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.568338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.568363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 
00:30:15.032 [2024-07-10 11:00:31.568565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.568754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.568796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.568964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.569135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.569159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.569323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.569465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.569491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.569697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.569891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.569942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.570114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.570285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.570310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.570524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.570715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.570759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.570938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.571118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.571142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 
00:30:15.032 [2024-07-10 11:00:31.571309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.571444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.571470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.571614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.571805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.571849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.572031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.572176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.572200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.572349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.572520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.032 [2024-07-10 11:00:31.572548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.032 qpair failed and we were unable to recover it. 00:30:15.032 [2024-07-10 11:00:31.572733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.572894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.572942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.573110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.573231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.573255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.573388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.573550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.573593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 
00:30:15.033 [2024-07-10 11:00:31.573738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.573933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.573976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.574096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.574245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.574270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.574392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.574521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.574547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.574666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.574811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.574854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.574978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.575097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.575122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.575275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.575402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.575432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.575611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.575776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.575820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 
00:30:15.033 [2024-07-10 11:00:31.576028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.576180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.576207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.576365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.576538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.576582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.576735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.576896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.576938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.577124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.577282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.577307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.577508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.577660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.577685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.577825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.577954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.577979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.578159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.578306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.578331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 
00:30:15.033 [2024-07-10 11:00:31.578459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.578577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.578603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.578727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.578913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.578939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.579058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.579207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.579232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.579356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.579524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.579550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.033 qpair failed and we were unable to recover it. 00:30:15.033 [2024-07-10 11:00:31.579733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.579885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.033 [2024-07-10 11:00:31.579911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.580070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.580229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.580254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.580381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.580529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.580572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 
00:30:15.034 [2024-07-10 11:00:31.580722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.580894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.580937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.581090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.581211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.581236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.581359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.581519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.581562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.581706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.581894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.581936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.582092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.582214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.582239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.582393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.582551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.582594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.582775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.582937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.582980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 
00:30:15.034 [2024-07-10 11:00:31.583197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.583415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.583454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.583612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.583783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.583816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.583987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.584154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.584182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.584354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.584513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.584541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.584712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.584852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.584881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.585084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.585251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.585280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.585459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.585612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.585638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 
00:30:15.034 [2024-07-10 11:00:31.585788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.585989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.586017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.586191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.586332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.586360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.586506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.586658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.586683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.586883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.587071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.587102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.587278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.587444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.587487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.587641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.587820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.587847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.588008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.588180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.588225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 
00:30:15.034 [2024-07-10 11:00:31.588397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.588562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.588590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.588725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.588862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.588904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.589091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.589275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.589303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.034 qpair failed and we were unable to recover it. 00:30:15.034 [2024-07-10 11:00:31.589445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.034 [2024-07-10 11:00:31.589583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.589608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.589725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.589856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.589881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.590022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.590151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.590178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.590345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.590512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.590538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 
00:30:15.035 [2024-07-10 11:00:31.590693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.590816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.590856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.590991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.591127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.591156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.591290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.591467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.591493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.591648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.591794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.591818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.591984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.592171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.592212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.592348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.592507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.592532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.592727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.592926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.592970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 
00:30:15.035 [2024-07-10 11:00:31.593111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.593247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.593275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.593455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.593575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.593600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.593720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.593848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.593878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.594045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.594187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.594214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.594359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.594522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.594548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.594669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.594846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.594874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.595038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.595217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.595244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 
00:30:15.035 [2024-07-10 11:00:31.595385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.595534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.595559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.595725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.595921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.595965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.596120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.596336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.596363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.596517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.596670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.596699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.596839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.597205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.597525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 
00:30:15.035 [2024-07-10 11:00:31.597834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.597986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.598014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.598160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.598295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.598322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.035 qpair failed and we were unable to recover it. 00:30:15.035 [2024-07-10 11:00:31.598514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.035 [2024-07-10 11:00:31.598638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.598663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.598799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.598912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.598936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.599139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.599303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.599330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.599513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.599671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.599696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.599887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.600048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.600091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 
00:30:15.036 [2024-07-10 11:00:31.600228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.600395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.600422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.600614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.600760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.600787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.600939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.601094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.601136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.601293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.601418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.601458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.601590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.601729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.601758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.601939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.602058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.602100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.602277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.602392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.602417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 
00:30:15.036 [2024-07-10 11:00:31.602581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.602762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.602786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.602944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.603070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.603096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.603220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.603395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.603423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.603612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.603735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.603762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.603911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.604085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.604113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.604286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.604452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.604480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.604619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.604791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.604818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 
00:30:15.036 [2024-07-10 11:00:31.605035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.605206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.605233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.605439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.605604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.605632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.605781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.605907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.605935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.606127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.606255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.606298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.606471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.606638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.606665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.606840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.606973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.607002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.607195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.607370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.607398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 
00:30:15.036 [2024-07-10 11:00:31.607554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.607682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.036 [2024-07-10 11:00:31.607706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.036 qpair failed and we were unable to recover it. 00:30:15.036 [2024-07-10 11:00:31.607893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.608162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.608504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.608791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.608942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.609070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.609243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.609267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.609396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.609595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.609622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 
00:30:15.037 [2024-07-10 11:00:31.609750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.609896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.609921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.610035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.610187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.610211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.610355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.610529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.610555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.610708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.610858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.610898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.611053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.611178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.611224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.611362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.611523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.611548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.611716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.611923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.611967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 
00:30:15.037 [2024-07-10 11:00:31.612115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.612245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.612270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.612403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.612538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.612563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.612713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.612835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.612860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.612993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.613141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.613182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.613346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.613503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.613532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.613673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.613806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.613834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.613985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.614110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.614135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 
00:30:15.037 [2024-07-10 11:00:31.614290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.614428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.614457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.614639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.614804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.614834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.615007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.615174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.615201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.615369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.615542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.615568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.615697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.615850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.615877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.616047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.616202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.616244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 00:30:15.037 [2024-07-10 11:00:31.616381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.616523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.037 [2024-07-10 11:00:31.616552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.037 qpair failed and we were unable to recover it. 
00:30:15.037 [2024-07-10 11:00:31.616684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.616849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.616876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.617052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.617231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.617259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.617437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.617578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.617605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.617738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.617890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.617917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.618076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.618216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.618240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.618363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.618541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.618567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.618695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.618914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.618939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 
00:30:15.038 [2024-07-10 11:00:31.619114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.619277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.619304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.619439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.619606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.619631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.619766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.619948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.619973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.620102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.620223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.620249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.620381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.620534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.620560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.620710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.620866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.620894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.621055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.621180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.621205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 
00:30:15.038 [2024-07-10 11:00:31.621399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.621570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.621612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.621776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.621978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.622126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.622504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.622852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.622998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.623170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.623336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.623363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.623550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.623675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.623699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 
00:30:15.038 [2024-07-10 11:00:31.623849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.624006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.624034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.038 [2024-07-10 11:00:31.624206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.624350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.038 [2024-07-10 11:00:31.624390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.038 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.624562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.624690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.624715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.624873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.625057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.625081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.625204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.625325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.625351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.625525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.625723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.625748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.625872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.626045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.626070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 
00:30:15.039 [2024-07-10 11:00:31.626190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.626314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.626340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.626490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.626680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.626705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.626850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.627021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.627048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.627225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.627351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.627376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.627551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.627679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.627705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.627878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.628055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.628081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.628259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.628415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.628448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 
00:30:15.039 [2024-07-10 11:00:31.628655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.628795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.628822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.628970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.629121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.629145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.629272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.629396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.629421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.629623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.629787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.629814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.629982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.630152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.630176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.630323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.630475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.630518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.630684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.630875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.630903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 
00:30:15.039 [2024-07-10 11:00:31.631044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.631209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.631237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.631408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.631537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.631579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.631756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.631954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.631982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.632108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.632286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.632313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.632459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.632591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.632615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.632765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.632924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.632951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 00:30:15.039 [2024-07-10 11:00:31.633116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.633255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.633282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.039 qpair failed and we were unable to recover it. 
00:30:15.039 [2024-07-10 11:00:31.633440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.633596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.039 [2024-07-10 11:00:31.633621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.633771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.633909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.633936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.634096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.634254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.634281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.634475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.634611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.634638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.634811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.634930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.634955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.635096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.635258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.635286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.635450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.635566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.635591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 
00:30:15.040 [2024-07-10 11:00:31.635743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.635890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.635917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.636107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.636272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.636301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.636473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.636591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.636616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.636757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.636950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.636978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.637141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.637276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.637305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.637478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.637590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.637615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.637799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.637948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.637973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 
00:30:15.040 [2024-07-10 11:00:31.638125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.638269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.638296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.638467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.638642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.638667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.638847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.639013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.639041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.639172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.639375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.639403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.639558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.639735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.639760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.639936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.640116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.640141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.640309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.640507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.640535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 
00:30:15.040 [2024-07-10 11:00:31.640675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.640830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.640855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.640977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.641122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.641146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.641290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.641467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.641493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.040 qpair failed and we were unable to recover it. 00:30:15.040 [2024-07-10 11:00:31.641623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.040 [2024-07-10 11:00:31.641823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.641850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.641992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.642166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.642193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.642411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.642562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.642604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.642777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.642927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.642967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 
00:30:15.041 [2024-07-10 11:00:31.643116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.643278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.643305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.643475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.643651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.643676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.643820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.643976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.644017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.644166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.644339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.644364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.644567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.644711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.644735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.644933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.645089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.645113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.645268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.645410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.645445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 
00:30:15.041 [2024-07-10 11:00:31.645645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.645840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.645885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.646032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.646186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.646210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.646406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.646563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.646588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.646718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.646906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.646930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.647054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.647211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.647237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.647401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.647567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.647595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.647768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.647927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.647955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 
00:30:15.041 [2024-07-10 11:00:31.648128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.648306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.648331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.648488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.648662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.648690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.648853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.649020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.649047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.649217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.649334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.649359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.649556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.649712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.649742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.649920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.650083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.650112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.650272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.650399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.650430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 
00:30:15.041 [2024-07-10 11:00:31.650579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.650731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.650770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.041 [2024-07-10 11:00:31.650931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.651135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.041 [2024-07-10 11:00:31.651163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.041 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.651320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.651495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.651520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.651664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.651860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.651889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.652080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.652242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.652270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.652472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.652641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.652669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.652860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.653024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.653052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 
00:30:15.042 [2024-07-10 11:00:31.653242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.653392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.653417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.653581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.653708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.653733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.653920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.654104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.654129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.654283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.654492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.654517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.654670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.654822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.654847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.655045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.655258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.655286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.655444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.655578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.655605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 
00:30:15.042 [2024-07-10 11:00:31.655780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.655928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.655953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.656152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.656317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.656345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.656488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.656683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.656710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.656920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.657046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.657071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.657231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.657423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.657472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.657598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.657773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.657798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.657928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.658084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.658110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 
00:30:15.042 [2024-07-10 11:00:31.658280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.658459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.658484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.658635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.658802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.658829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.659029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.659191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.659218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.659348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.659528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.659553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.659681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.659825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.042 [2024-07-10 11:00:31.659850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.042 qpair failed and we were unable to recover it. 00:30:15.042 [2024-07-10 11:00:31.660002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.660124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.660149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.660275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.660403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.660433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 
00:30:15.043 [2024-07-10 11:00:31.660569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.660759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.660783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.660939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.661107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.661135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.661277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.661412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.661447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.661589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.661750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.661778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.661933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.662059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.662086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.662286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.662448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.662477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.662629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.662773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.662798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 
00:30:15.043 [2024-07-10 11:00:31.662974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.663146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.663175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.663338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.663552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.663601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.663745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.663919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.663946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.664085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.664243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.664268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.664434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.664586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.664610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.664763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.664943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.664968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.665120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.665266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.665308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 
00:30:15.043 [2024-07-10 11:00:31.665509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.665693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.665736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.665901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.666072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.666099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.666270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.666447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.666488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.666631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.666798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.666826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.667020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.667196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.667222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.667436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.667606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.667632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.043 qpair failed and we were unable to recover it. 00:30:15.043 [2024-07-10 11:00:31.667804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.043 [2024-07-10 11:00:31.667945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.667978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 
00:30:15.044 [2024-07-10 11:00:31.668173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.668310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.668338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.668544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.668670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.668713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.668908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.669119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.669163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.669327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.669495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.669520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.669652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.669804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.669829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.670026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.670181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.670206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.670332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.670506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.670532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 
00:30:15.044 [2024-07-10 11:00:31.670668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.670791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.670816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.670999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.671184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.671225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.671408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.671532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.671562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.671719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.671835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.671860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.672008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.672194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.672221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.672391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.672558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.672586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.672741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.672920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.672962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 
00:30:15.044 [2024-07-10 11:00:31.673130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.673290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.673317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.673463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.673632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.673658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.673805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.673967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.674009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.674156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.674306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.674330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.674497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.674645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.674673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.674842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.674994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.675037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.675213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.675377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.675405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 
00:30:15.044 [2024-07-10 11:00:31.675595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.675816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.675841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.044 [2024-07-10 11:00:31.675997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.676147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.044 [2024-07-10 11:00:31.676172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.044 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.676363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.676512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.676537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.676715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.676894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.676919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.677105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.677305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.677332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.677512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.677670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.677694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.677900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.678076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.678101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 
00:30:15.045 [2024-07-10 11:00:31.678252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.678422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.678456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.678625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.678793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.678818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.679009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.679176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.679203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.679382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.679548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.679573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.679724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.679894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.679919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.680073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.680220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.680247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.680409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.680567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.680607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 
00:30:15.045 [2024-07-10 11:00:31.680771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.680943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.680970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.681101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.681272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.681299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.681476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.681623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.681666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.681868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.682020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.682044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.682169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.682319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.682344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.682563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.682678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.682719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.682897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.683070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.683095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 
00:30:15.045 [2024-07-10 11:00:31.683266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.683465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.683490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.683638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.683786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.683827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.683952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.684120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.684147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.684312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.684453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.684482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.684628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.684743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.684767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.684907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.685097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.685125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 00:30:15.045 [2024-07-10 11:00:31.685287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.685456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.685486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.045 qpair failed and we were unable to recover it. 
00:30:15.045 [2024-07-10 11:00:31.685644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.685794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-10 11:00:31.685819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.686017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.686172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.686200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.686359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.686514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.686543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.686720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.686866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.686906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.687076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.687242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.687270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.687437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.687603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.687630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.687805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.687959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.687985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 
00:30:15.046 [2024-07-10 11:00:31.688141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.688269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.688296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.688462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.688608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.688637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.688817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.688968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.688993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.689143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.689293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.689318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.689506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.689674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.689706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.689843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.689968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.689993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.690141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.690299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.690323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 
00:30:15.046 [2024-07-10 11:00:31.690498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.690648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.690676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.690836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.690985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.691131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.691520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.691809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.691962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.692168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.692321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.692362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.692540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.692689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.692714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 
00:30:15.046 [2024-07-10 11:00:31.692846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.693021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.693045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.693232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.693435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.693460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.693588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.693742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.693770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.693938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.694118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-10 11:00:31.694160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.046 qpair failed and we were unable to recover it. 00:30:15.046 [2024-07-10 11:00:31.694367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.694562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.694590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.694762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.694983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.695034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.695198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.695376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.695419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 
00:30:15.047 [2024-07-10 11:00:31.695573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.695696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.695723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.695893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.696060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.696111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.696333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.696454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.696497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.696673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.696828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.696853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.697024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.697147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.697175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.697345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.697537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.697565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.697705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.697872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.697899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 
00:30:15.047 [2024-07-10 11:00:31.698066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.698198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.698226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.698374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.698491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.698517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.698722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.698884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.698911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.699078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.699239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.699267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.699416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.699637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.699665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.699811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.699960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.699985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.700155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.700329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.700353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 
00:30:15.047 [2024-07-10 11:00:31.700537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.700713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.700741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.700916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.701089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.701130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.701339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.701464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.701490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.701677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.701804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.701845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.701987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.702176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.702203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.702372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.702533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.702559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.702684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.702860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.702885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 
00:30:15.047 [2024-07-10 11:00:31.703068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.703260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.703288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.703434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.703563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.703587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.703717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.703842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-10 11:00:31.703866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.047 qpair failed and we were unable to recover it. 00:30:15.047 [2024-07-10 11:00:31.704043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.704206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.704230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.704443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.704585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.704612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.704778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.704898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.704922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.705050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.705225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.705250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 
00:30:15.048 [2024-07-10 11:00:31.705453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.705639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.705664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.705812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.706009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.706045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.706211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.706403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.706438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.706571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.706765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.706792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.706995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.707115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.707140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.707297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.707464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.707490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.707667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.707808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.707840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 
00:30:15.048 [2024-07-10 11:00:31.707987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.708140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.708165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.708331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.708473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.708500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.708634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.708810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.708838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.708993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.709150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.709175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.709341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.709498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.709527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.709727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.709853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.709878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 00:30:15.048 [2024-07-10 11:00:31.710056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.710249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.048 [2024-07-10 11:00:31.710277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.048 qpair failed and we were unable to recover it. 
[... the identical error sequence — posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (twice), nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats verbatim for every subsequent connection attempt from 2024-07-10 11:00:31.710417 through 11:00:31.763065 (elapsed 00:30:15.048–00:30:15.054); only the timestamps differ ...]
00:30:15.054 [2024-07-10 11:00:31.763219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.763377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.763402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.763537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.763696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.763720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.763905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.764178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.764563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.764868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.764992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.765017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.765165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.765365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.765389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 
00:30:15.054 [2024-07-10 11:00:31.765551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.765698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.765739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.765900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.766044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.766072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.766263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.766386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.766412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.766551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.766676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.766701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.766878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.767026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.767053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.767191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.767356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.767383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 00:30:15.054 [2024-07-10 11:00:31.767523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.767665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.054 [2024-07-10 11:00:31.767694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.054 qpair failed and we were unable to recover it. 
00:30:15.055 [2024-07-10 11:00:31.767871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.767989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.768177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.768463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.768818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.768959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.769105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.769288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.769312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.769448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.769609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.769636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.769814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.769963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.770006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 
00:30:15.055 [2024-07-10 11:00:31.770176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.770334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.770361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.770541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.770668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.770692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.770848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.770972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.771011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.771142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.771278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.771307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.771520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.771668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.771710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.771861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.772189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 
00:30:15.055 [2024-07-10 11:00:31.772506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.772844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.772969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.773144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.773468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.773781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.773931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.774080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.774247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.774274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.774434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.774558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.774582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 
00:30:15.055 [2024-07-10 11:00:31.774708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.774829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.774853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.775043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.775197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.775221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.775372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.775534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.775562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.775705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.775856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.775881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.776054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.776203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.776227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.776380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.776562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.055 [2024-07-10 11:00:31.776590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.055 qpair failed and we were unable to recover it. 00:30:15.055 [2024-07-10 11:00:31.776760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.776886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.776926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 
00:30:15.056 [2024-07-10 11:00:31.777066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.777245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.777269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.777395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.777534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.777559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.777688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.777812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.777836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.777988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.778117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.778144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.778276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.778441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.778484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.778615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.778770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.778794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.778919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 
00:30:15.056 [2024-07-10 11:00:31.779239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.779546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.779824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.779984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.780130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.780396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.780719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.780905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.781032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.781179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.781203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 
00:30:15.056 [2024-07-10 11:00:31.781363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.781497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.781522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.781698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.781873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.781900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.782036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.782213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.782237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.782423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.782578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.782607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.782741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.782946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.782973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.783142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.783283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.783312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.783521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.783645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.783671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 
00:30:15.056 [2024-07-10 11:00:31.783829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.784014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.784038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.784197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.784323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.056 [2024-07-10 11:00:31.784347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.056 qpair failed and we were unable to recover it. 00:30:15.056 [2024-07-10 11:00:31.784475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.784603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.784628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.784832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.785197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.785518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.785817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.785998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 
00:30:15.057 [2024-07-10 11:00:31.786168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.786345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.786371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.786524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.786684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.786709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.786850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.787198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.787506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.787808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.787988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.788128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.788268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.788295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 
00:30:15.057 [2024-07-10 11:00:31.788466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.788616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.788641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.788792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.788995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.789023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.789214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.789338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.789365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.789523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.789653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.789679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.789813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.790027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.790054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.790222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.790373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.790398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.790521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.790640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.790665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 
00:30:15.057 [2024-07-10 11:00:31.790846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.791003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.791030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.791215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.791361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.791388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.791540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.791715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.791742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.791910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.792077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.792101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.792269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.792409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.792446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.792589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.792738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.792763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.792899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.793080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.793108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 
00:30:15.057 [2024-07-10 11:00:31.793306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.793497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.057 [2024-07-10 11:00:31.793526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.057 qpair failed and we were unable to recover it. 00:30:15.057 [2024-07-10 11:00:31.793678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.793828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.793852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.058 qpair failed and we were unable to recover it. 00:30:15.058 [2024-07-10 11:00:31.794041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.794169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.794196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.058 qpair failed and we were unable to recover it. 00:30:15.058 [2024-07-10 11:00:31.794369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.794503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.794546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.058 qpair failed and we were unable to recover it. 00:30:15.058 [2024-07-10 11:00:31.794701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.794857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.794882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.058 qpair failed and we were unable to recover it. 00:30:15.058 [2024-07-10 11:00:31.795009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.795164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.795191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.058 qpair failed and we were unable to recover it. 00:30:15.058 [2024-07-10 11:00:31.795371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.795504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.058 [2024-07-10 11:00:31.795529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.058 qpair failed and we were unable to recover it. 
00:30:15.058 [2024-07-10 11:00:31.795663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.058 [2024-07-10 11:00:31.795782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.058 [2024-07-10 11:00:31.795809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:15.058 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock connection error for tqpair=0x23ae350 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 11:00:31.795 to 11:00:31.851, log timestamps 00:30:15.058 through 00:30:15.337, with no other output in between ...]
00:30:15.337 [2024-07-10 11:00:31.851099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.851310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.851339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.851518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.851691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.851720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.851896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.852053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.852079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.852282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.852512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.852568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.852779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.852960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.852986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.853162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.853337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.853365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.853535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.853683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.853725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 
00:30:15.337 [2024-07-10 11:00:31.853865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.854029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.854057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.854233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.854380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.854406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.854576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.854744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.854773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.854970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.855170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.855196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.855373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.855532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.855575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.855713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.855878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.855907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.337 qpair failed and we were unable to recover it. 00:30:15.337 [2024-07-10 11:00:31.856084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.337 [2024-07-10 11:00:31.856234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.856261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 
00:30:15.338 [2024-07-10 11:00:31.856391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.856551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.856579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.856723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.856983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.857035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.857204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.857397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.857441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.857625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.857773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.857816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.857997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.858121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.858151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.858331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.858530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.858560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.858729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.858975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.859046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 
00:30:15.338 [2024-07-10 11:00:31.859209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.859378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.859413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.859609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.859739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.859766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.859915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.860070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.860097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.860243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.860420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.860453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.860588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.860785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.860811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.860970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.861124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.861150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.861326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.861520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.861548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 
00:30:15.338 [2024-07-10 11:00:31.861706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.861893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.861919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.862099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.862297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.862326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.862492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.862662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.862691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.862856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.863021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.863049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.863232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.863382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.863433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.863567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.863748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.863775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.863928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.864111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.864137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 
00:30:15.338 [2024-07-10 11:00:31.864267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.864447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.864474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.864673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.864906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.864935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.865133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.865296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.865325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.865480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.865627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.865669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.865877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.866058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.338 [2024-07-10 11:00:31.866101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.338 qpair failed and we were unable to recover it. 00:30:15.338 [2024-07-10 11:00:31.866292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.866441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.866470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.866643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.866797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.866841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 
00:30:15.339 [2024-07-10 11:00:31.867016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.867158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.867187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.867356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.867557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.867587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.867761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.867910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.867937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.868116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.868249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.868279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.868414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.868602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.868629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.868754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.868881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.868907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.869034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.869177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.869206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 
00:30:15.339 [2024-07-10 11:00:31.869371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.869541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.869572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.869724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.869875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.869916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.870109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.870242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.870271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.870486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.870670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.870697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.870888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.871013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.871041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.871253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.871420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.871457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.871629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.871775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.871805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 
00:30:15.339 [2024-07-10 11:00:31.871955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.872105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.872148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.872325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.872472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.872498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.872652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.872807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.872836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.873016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.873167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.873211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.873389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.873549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.873577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.873790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.873963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.873990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.874136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.874308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.874337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 
00:30:15.339 [2024-07-10 11:00:31.874533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.874716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.874742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.874902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.875080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.875109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.875288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.875487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.875517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.339 qpair failed and we were unable to recover it. 00:30:15.339 [2024-07-10 11:00:31.875707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.339 [2024-07-10 11:00:31.875885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.875929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.876094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.876285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.876312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.876538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.876697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.876723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.876879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.877094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.877124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 
00:30:15.340 [2024-07-10 11:00:31.877287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.877486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.877516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.877687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.877838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.877880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.878023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.878225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.878256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.878392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.878568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.878595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.878729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.878857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.878885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.879085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.879234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.879260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.879443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.879640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.879697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 
00:30:15.340 [2024-07-10 11:00:31.879839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.879989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.880015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.880201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.880398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.880430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.880603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.880798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.880853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.880991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.881145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.881171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.881322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.881519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.881549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.881721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.881863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.881897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.882067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.882214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.882256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 
00:30:15.340 [2024-07-10 11:00:31.882415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.882621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.882648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.882855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.883049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.883078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.883244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.883365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.883391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.883527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.883678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.883705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.883829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.884030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.884059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.884241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.884366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.884393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.340 qpair failed and we were unable to recover it. 00:30:15.340 [2024-07-10 11:00:31.884551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.884676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.340 [2024-07-10 11:00:31.884702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 
00:30:15.341 [2024-07-10 11:00:31.884876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.885085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.885137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.885307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.885483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.885510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.885675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.885847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.885876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.886051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.886195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.886224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.886400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.886543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.886585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.886782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.887089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.887147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.887341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.887508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.887537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 
00:30:15.341 [2024-07-10 11:00:31.887715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.887939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.887991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.888183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.888354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.888383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.888544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.888721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.888764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.888985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.889147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.889174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.889349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.889557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.889602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.889821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.890023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.890053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.890260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.890422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.890466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 
00:30:15.341 [2024-07-10 11:00:31.890656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.890839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.890868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.891040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.891207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.891236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.891408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.891548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.891592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.891738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.891927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.891990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.892147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.892336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.892364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.892545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.892716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.892744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.892924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.893075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.893117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 
00:30:15.341 [2024-07-10 11:00:31.893289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.893447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.893475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.893623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.893776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.893820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.893947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.894115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.341 [2024-07-10 11:00:31.894146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.341 qpair failed and we were unable to recover it. 00:30:15.341 [2024-07-10 11:00:31.894314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.894480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.894510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.894710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.894854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.894884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.895055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.895219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.895248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.895410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.895569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.895596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 
00:30:15.342 [2024-07-10 11:00:31.895747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.895868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.895895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.896028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.896151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.896178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.896382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.896538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.896564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.896716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.896892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.896921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.897088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.897301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.897343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.897492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.897628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.897655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.897785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.897936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.897979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 
00:30:15.342 [2024-07-10 11:00:31.898145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.898307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.898336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.898516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.898643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.898670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.898853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.899139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.899190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.899369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.899556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.899583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.899764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.899916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.899958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.900148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.900318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.900347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.900526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.900705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.900731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 
00:30:15.342 [2024-07-10 11:00:31.900883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.901068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.901098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.901231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.901434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.901464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.901634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.901838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.901891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.902067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.902233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.902263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.902464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.902623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.902650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.902852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.903010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.903037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.903165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.903373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.903400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 
00:30:15.342 [2024-07-10 11:00:31.903568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.903747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.903773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.903942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.904079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.342 [2024-07-10 11:00:31.904110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.342 qpair failed and we were unable to recover it. 00:30:15.342 [2024-07-10 11:00:31.904282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.904433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.904460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.904625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.904743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.904770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.904957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.905161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.905188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.905343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.905525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.905555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.905733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.905855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.905881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 
00:30:15.343 [2024-07-10 11:00:31.906028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.906258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.906287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.906494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.906626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.906653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.906831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.907002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.907033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.907192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.907358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.907387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.907565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.907704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.907733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.907932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.908104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.908132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.908275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.908447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.908478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 
00:30:15.343 [2024-07-10 11:00:31.908664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.908791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.908818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.909000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.909173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.909202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.909335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.909499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.909529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.909724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.909905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.909931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.910112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.910270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.910296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.910476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.910685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.910714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.910880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.911036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.911066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 
00:30:15.343 [2024-07-10 11:00:31.911233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.911399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.911449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.911589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.911756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.911785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.343 [2024-07-10 11:00:31.911915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.912107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.343 [2024-07-10 11:00:31.912136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.343 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.912317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.912469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.912496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.912659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.912875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.912904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.913066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.913205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.913234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.913444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.913673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.913735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 
00:30:15.344 [2024-07-10 11:00:31.913930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.914098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.914127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.914298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.914490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.914551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.914754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.914907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.914933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.915084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.915287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.915314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.915451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.915633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.915662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.915843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.915993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.916019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.916199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.916373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.916402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 
00:30:15.344 [2024-07-10 11:00:31.916599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.916756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.916800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.917010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.917208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.917237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.917411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.917577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.917607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.917810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.917949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.917979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.918156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.918271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.918297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.918484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.918642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.918685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.918858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.919026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.919057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 
00:30:15.344 [2024-07-10 11:00:31.919233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.919411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.919448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.919621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.919769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.919795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.919913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.920037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.920068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.920225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.920422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.920458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.920624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.920794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.920820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.920975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.921168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.921194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.344 [2024-07-10 11:00:31.921352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.921506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.921533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 
00:30:15.344 [2024-07-10 11:00:31.921690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.921849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.344 [2024-07-10 11:00:31.921892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.344 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.922090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.922236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.922262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.922442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.922618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.922648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.922776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.922955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.922981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.923136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.923331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.923361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.923520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.923679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.923705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.923898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.924067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.924093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 
00:30:15.345 [2024-07-10 11:00:31.924287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.924488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.924518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.924696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.924899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.924928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.925130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.925325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.925354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.925547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.925720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.925750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.925928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.926102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.926173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.926372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.926562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.926593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.926792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.927011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.927037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 
00:30:15.345 [2024-07-10 11:00:31.927191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.927358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.927387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.927569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.927718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.927761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.927978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.928124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.928151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.928308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.928430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.928472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.928643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.928797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.928824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.929008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.929278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.929330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.929534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.929697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.929726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 
00:30:15.345 [2024-07-10 11:00:31.929933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.930052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.930079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.930222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.930394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.930434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.930589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.930745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.930786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.345 [2024-07-10 11:00:31.930955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.931123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.345 [2024-07-10 11:00:31.931150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.345 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.931270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.931408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.931441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.931606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.931737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.931781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.931952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.932120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.932149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 
00:30:15.346 [2024-07-10 11:00:31.932292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.932487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.932518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.932694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.932815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.932844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.933019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.933217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.933246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.933418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.933562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.933592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.933772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.933946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.933972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.934125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.934319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.934346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.934475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.934623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.934650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 
00:30:15.346 [2024-07-10 11:00:31.934839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.934992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.935037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.935202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.935400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.935438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.935635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.935805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.935834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.936032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.936203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.936232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.936421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.936551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.936579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.936737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.936918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.936947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.937101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.937250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.937276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 
00:30:15.346 [2024-07-10 11:00:31.937477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.937662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.937689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.937817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.937968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.346 [2024-07-10 11:00:31.937994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.346 qpair failed and we were unable to recover it. 00:30:15.346 [2024-07-10 11:00:31.938152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.938306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.938334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.938546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.938712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.938756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.938934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.939141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.939175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.939325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.939503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.939531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.939649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.939829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.939872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 
00:30:15.347 [2024-07-10 11:00:31.940047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.940224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.940251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.940408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.940530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.940557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.940719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.940887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.940917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.941094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.941275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.941301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.941452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.941575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.941602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.941755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.941976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.942002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.942155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.942312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.942340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 
00:30:15.347 [2024-07-10 11:00:31.942524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.942671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.942706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.942850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.943011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.943041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.943235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.943366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.943392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.943564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.943716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.943742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.943926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.944166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.944218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.944414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.944595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.944621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.944778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.944976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.945006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 
00:30:15.347 [2024-07-10 11:00:31.945148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.945312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.945341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.945537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.945714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.945740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.945893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.946155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.946206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.946382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.946553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.946583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.946781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.946951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.946980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.947159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.947363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.947393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 00:30:15.347 [2024-07-10 11:00:31.947597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.947759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.347 [2024-07-10 11:00:31.947788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.347 qpair failed and we were unable to recover it. 
00:30:15.348 [2024-07-10 11:00:31.947920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.948116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.948190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.948366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.948490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.948519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.948708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.948996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.949051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.949262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.949459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.949489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.949642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.949793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.949820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.949999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.950166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.950195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.950372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.950525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.950552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 
00:30:15.348 [2024-07-10 11:00:31.950683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.950833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.950874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.951006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.951152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.951178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.951358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.951519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.951546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.951723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.951871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.951898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.952072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.952267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.952297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.952452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.952620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.952662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.952864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.953034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.953063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 
00:30:15.348 [2024-07-10 11:00:31.953224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.953367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.953395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.953577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.953719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.953748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.953939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.954135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.954211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.954406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.954564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.954594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.954759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.954971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.955027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.955231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.955438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.955468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.955669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.955968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.956022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 
00:30:15.348 [2024-07-10 11:00:31.956227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.956437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.956464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.956621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.956768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.956856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.957062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.957251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.957281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.957464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.957590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.957617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.957800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.958000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.958029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.348 [2024-07-10 11:00:31.958197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.958371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.348 [2024-07-10 11:00:31.958400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.348 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.958575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.958764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.958791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 
00:30:15.349 [2024-07-10 11:00:31.958975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.959150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.959179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.959347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.959553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.959583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.959745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.959914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.959943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.960156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.960329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.960357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.960517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.960661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.960690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.960851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.960986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.961015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.961182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.961337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.961380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 
00:30:15.349 [2024-07-10 11:00:31.961536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.961745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.961774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.961920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.962112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.962140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.962316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.962474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.962522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.962722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.962977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.963035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.963228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.963404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.963436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.963598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.963755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.963781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.963968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.964150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.964176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 
00:30:15.349 [2024-07-10 11:00:31.964333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.964511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.964539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.964670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.964790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.964816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.964998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.965228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.965285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.965457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.965595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.965625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.965774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.965927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.965954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.966104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.966237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.966266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.966442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.966638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.966667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 
00:30:15.349 [2024-07-10 11:00:31.966819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.966983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.967009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.967215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.967373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.967399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.967569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.967747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.967813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.967990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.968134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.968161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.349 [2024-07-10 11:00:31.968344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.968483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.349 [2024-07-10 11:00:31.968514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.349 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.968683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.968819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.968848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.969050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.969211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.969237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 
00:30:15.350 [2024-07-10 11:00:31.969418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.969604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.969633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.969826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.970006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.970033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.970228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.970381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.970408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.970594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.970801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.970831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.971035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.971188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.971230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.971430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.971590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.971616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.971794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.971940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.971966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 
00:30:15.350 [2024-07-10 11:00:31.972161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.972329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.972359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.972569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.972756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.972786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.972955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.973153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.973214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.973386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.973564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.973595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.973774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.973947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.973977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.974174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.974346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.974373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.974527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.974678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.974704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 
00:30:15.350 [2024-07-10 11:00:31.974851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.975006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.975052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.975221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.975396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.975423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.975616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.975791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.975818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.975971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.976120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.976147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.976300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.976420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.976455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.976594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.976772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.976801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.976966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.977097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.977125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 
00:30:15.350 [2024-07-10 11:00:31.977334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.977511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.977542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.977680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.977868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.977895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.978075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.978258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.978284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.350 [2024-07-10 11:00:31.978437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.978657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.350 [2024-07-10 11:00:31.978683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.350 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.978861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.979033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.979062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.979241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.979441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.979471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.979676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.979831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.979859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 
00:30:15.351 [2024-07-10 11:00:31.980037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.980231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.980261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.980445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.980625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.980651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.980807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.981017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.981047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.981214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.981335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.981363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.981516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.981665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.981698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.981905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.982202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.982261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.982455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.982629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.982658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 
00:30:15.351 [2024-07-10 11:00:31.982803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.982978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.983020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.983226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.983389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.983417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.983580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.983709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.983738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.983885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.984034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.984060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.984258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.984414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.984447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.984628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.984805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.984834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 00:30:15.351 [2024-07-10 11:00:31.985008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.985159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.351 [2024-07-10 11:00:31.985185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.351 qpair failed and we were unable to recover it. 
00:30:15.354 [2024-07-10 11:00:32.011764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.011935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.011979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.354 [2024-07-10 11:00:32.012148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.012313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.012339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.354 [2024-07-10 11:00:32.012618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.012823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.012857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.354 [2024-07-10 11:00:32.013035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.013227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.013255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.354 [2024-07-10 11:00:32.013383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.013545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.013572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.354 [2024-07-10 11:00:32.013695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.013836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.013863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.354 [2024-07-10 11:00:32.014020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.014172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.354 [2024-07-10 11:00:32.014199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.354 qpair failed and we were unable to recover it.
00:30:15.357 [2024-07-10 11:00:32.034292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.034448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.034482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.034618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.034846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.034900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.035040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.035194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.035220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.035351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.035483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.035510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.035648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.035808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.035836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.035978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.036168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.036219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.036356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.036513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.036541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 
00:30:15.357 [2024-07-10 11:00:32.036697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.036829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.036856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.037008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.037178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.037205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.037367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.037557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.037602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.037802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.037971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.038001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.038170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.038316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.038342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.038498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.038661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.038698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 00:30:15.357 [2024-07-10 11:00:32.038861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.039041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.039072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.357 qpair failed and we were unable to recover it. 
00:30:15.357 [2024-07-10 11:00:32.039209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.357 [2024-07-10 11:00:32.039363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.039390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.039575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.039765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.039810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.040019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.040176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.040202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.040336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.040470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.040500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.040692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.040913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.040963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.041090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.041245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.041271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.041401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.041595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.041639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 
00:30:15.358 [2024-07-10 11:00:32.041816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.042000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.042051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.042178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.042302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.042340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.042498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.042682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.042709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.042857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.043029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.043058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.043212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.043365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.043392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.043574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.043771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.043815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.043964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.044101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.044128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 
00:30:15.358 [2024-07-10 11:00:32.044290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.044417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.044449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.044604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.044801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.044830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.045029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.045205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.045233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.045401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.045593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.045622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.045803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.045974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.046019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.046178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.046356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.046383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.046565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.046704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.046732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 
00:30:15.358 [2024-07-10 11:00:32.046885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.047077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.047121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.047245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.047440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.047475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.358 qpair failed and we were unable to recover it. 00:30:15.358 [2024-07-10 11:00:32.047620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.358 [2024-07-10 11:00:32.047811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.047841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.048025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.048199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.048227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.048383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.048577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.048620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.048775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.048961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.049009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.049227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.049382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.049409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 
00:30:15.359 [2024-07-10 11:00:32.049591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.049755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.049799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.049962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.050159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.050203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.050360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.050488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.050517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.050675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.050872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.050917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.051094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.051264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.051291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.051437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.051588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.051632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.051829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.052026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.052070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 
00:30:15.359 [2024-07-10 11:00:32.052227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.052355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.052381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.052539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.052710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.052760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.052941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.053113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.053140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.053291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.053453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.053480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.053631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.053831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.053876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.054010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.054164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.054191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.054323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.054471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.054501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 
00:30:15.359 [2024-07-10 11:00:32.054712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.054884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.054930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.055084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.055215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.055242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.055399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.055552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.055596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.359 qpair failed and we were unable to recover it. 00:30:15.359 [2024-07-10 11:00:32.055803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.359 [2024-07-10 11:00:32.055959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.056002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.056162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.056344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.056375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.056528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.056723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.056768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.056976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.057148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.057175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 
00:30:15.360 [2024-07-10 11:00:32.057353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.057550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.057594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.057757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.057930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.057975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.058130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.058253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.058279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.058420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.058627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.058672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.058879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.059098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.059144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.059301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.059459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.059486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.059660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.059859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.059902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 
00:30:15.360 [2024-07-10 11:00:32.060065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.060213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.060244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.060422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.060602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.060632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.060791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.061018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.061062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.061217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.061348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.061375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.061548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.061708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.061752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.061913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.062118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.062145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.062294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.062497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.062542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 
00:30:15.360 [2024-07-10 11:00:32.062698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.062889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.062918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.063061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.063217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.063244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.063391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.063551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.063596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.063775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.063939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.063983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.360 [2024-07-10 11:00:32.064139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.064280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.360 [2024-07-10 11:00:32.064306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.360 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.064478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.064647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.064673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.064795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.064951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.064977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 
00:30:15.361 [2024-07-10 11:00:32.065155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.065289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.065315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.065471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.065638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.065682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.065858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.066018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.066073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.066225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.066381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.066409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.066574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.066751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.066795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.066977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.067166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.067198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.067346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.067515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.067561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 
00:30:15.361 [2024-07-10 11:00:32.067705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.067920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.067964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.068092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.068238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.068265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.068445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.068670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.068714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.068918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.069086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.069112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.069273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.069401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.069447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.069656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.069809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.069836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.069973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.070118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.070146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 
00:30:15.361 [2024-07-10 11:00:32.070299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.070473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.070504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.070706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.070900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.070930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.071067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.071187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.071213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.071370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.071517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.071562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.071743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.071911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.071957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.072084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.072204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.072231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.072386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.072575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.072621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 
00:30:15.361 [2024-07-10 11:00:32.072766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.072963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.073007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.073141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.073267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.073293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.073418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.073582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.361 [2024-07-10 11:00:32.073627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.361 qpair failed and we were unable to recover it. 00:30:15.361 [2024-07-10 11:00:32.073803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.073969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.074013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.362 qpair failed and we were unable to recover it. 00:30:15.362 [2024-07-10 11:00:32.074147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.074298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.074325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.362 qpair failed and we were unable to recover it. 00:30:15.362 [2024-07-10 11:00:32.074500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.074655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.074681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.362 qpair failed and we were unable to recover it. 00:30:15.362 [2024-07-10 11:00:32.074838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.074999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.362 [2024-07-10 11:00:32.075026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.362 qpair failed and we were unable to recover it. 
00:30:15.362 [2024-07-10 11:00:32.075181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.362 [2024-07-10 11:00:32.075305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.362 [2024-07-10 11:00:32.075332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:15.362 qpair failed and we were unable to recover it.
00:30:15.363 [2024-07-10 11:00:32.086152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.363 [2024-07-10 11:00:32.086375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.363 [2024-07-10 11:00:32.086409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.363 qpair failed and we were unable to recover it.
00:30:15.368 [2024-07-10 11:00:32.132292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.368 [2024-07-10 11:00:32.132422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.368 [2024-07-10 11:00:32.132459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420
00:30:15.368 qpair failed and we were unable to recover it.
00:30:15.368 [2024-07-10 11:00:32.132663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.132874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.132900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.133054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.133214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.133241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.133499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.133669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.133698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.133892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.134074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.134118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.134303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.134507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.134538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.134733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.134883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.134912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.135113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.135281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.135310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 
00:30:15.368 [2024-07-10 11:00:32.135515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.135663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.135705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.135848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.136019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.136049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.136181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.136332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.136358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.136511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.136667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.136693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.136849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.137021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.137051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.137247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.137390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.137422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.137613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.137757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.137800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 
00:30:15.368 [2024-07-10 11:00:32.137975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.138124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.138151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.138331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.138501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.138532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.368 qpair failed and we were unable to recover it. 00:30:15.368 [2024-07-10 11:00:32.138741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.138908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.368 [2024-07-10 11:00:32.138938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.139113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.139317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.139343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.139548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.139715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.139744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.139907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.140107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.140136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.140331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.140526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.140556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 
00:30:15.369 [2024-07-10 11:00:32.140751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.140880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.140909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.141064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.141217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.141243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.141449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.141595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.141624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.141785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.141945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.141974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.142153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.142309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.142354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.142505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.142700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.142729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.142930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.143081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.143107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 
00:30:15.369 [2024-07-10 11:00:32.143232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.143415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.143448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.143608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.143771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.143800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.143970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.144136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.144165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.144317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.144494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.144521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.144670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.144881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.144910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.369 [2024-07-10 11:00:32.145064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.145188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.369 [2024-07-10 11:00:32.145214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.369 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.145370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.145548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.145575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 
00:30:15.643 [2024-07-10 11:00:32.145708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.145872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.145900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.146065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.146235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.146279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.146464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.146615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.146654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.146877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.147179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.147238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.147440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.147585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.147622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.147798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.147977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.148043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.148238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.148444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.148474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 
00:30:15.643 [2024-07-10 11:00:32.148643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.148793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.148820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.148949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.149069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.149097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.149273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.149528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.149556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.149680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.149843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.149871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.149996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.150162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.150189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.150375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.150535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.150562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.150722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.150852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.150880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 
00:30:15.643 [2024-07-10 11:00:32.151058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.151236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.151280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.151423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.151605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.151632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.643 qpair failed and we were unable to recover it. 00:30:15.643 [2024-07-10 11:00:32.151784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.643 [2024-07-10 11:00:32.151960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.151989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.152194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.152392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.152422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.152635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.152787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.152815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.153027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.153258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.153284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.153447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.153599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.153625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 
00:30:15.644 [2024-07-10 11:00:32.153782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.153961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.153992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.154174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.154357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.154383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.154557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.154715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.154741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.154901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.155076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.155106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.155277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.155484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.155514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.155679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.155802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.155828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.155984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.156136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.156163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 
00:30:15.644 [2024-07-10 11:00:32.156330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.156538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.156574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.156727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.156858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.156885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.157042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.157225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.157252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.157462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.157618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.157650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.157783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.157931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.157957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.158106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.158245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.158275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.158454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.158606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.158633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 
00:30:15.644 [2024-07-10 11:00:32.158815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.158935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.158978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.159149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.159318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.159347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.159531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.159709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.159751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.159904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.160069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.160095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.160242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.160385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.160414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.160592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.160762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.160791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.160989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.161143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.161169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 
00:30:15.644 [2024-07-10 11:00:32.161330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.161529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.161556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.644 qpair failed and we were unable to recover it. 00:30:15.644 [2024-07-10 11:00:32.161708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.644 [2024-07-10 11:00:32.161996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.162045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.162214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.162346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.162372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.162535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.162685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.162712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.162937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.163112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.163170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.163374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.163542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.163569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.163723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.163853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.163879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 
00:30:15.645 [2024-07-10 11:00:32.164075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.164234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.164262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.164434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.164639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.164668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.164837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.165080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.165134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.165361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.165496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.165524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.165650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.165797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.165823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.166022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.166169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.166198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.166394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.166572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.166599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 
00:30:15.645 [2024-07-10 11:00:32.166785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.166906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.166932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.167087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.167239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.167266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.167451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.167631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.167673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.167860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.167987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.168013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.168162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.168322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.168348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.168540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.168794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.168853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.169034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.169220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.169246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 
00:30:15.645 [2024-07-10 11:00:32.169404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.169584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.169614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.169812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.169963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.169990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.170138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.170285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.170328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.170539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.170686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.170713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.170888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.171089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.171118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.171270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.171469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.171500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 00:30:15.645 [2024-07-10 11:00:32.171631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.171775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.645 [2024-07-10 11:00:32.171804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.645 qpair failed and we were unable to recover it. 
00:30:15.646 [2024-07-10 11:00:32.171989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.172110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.172136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.172315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.172516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.172545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.172725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.172906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.172933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.173109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.173273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.173302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.173500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.173630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.173674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.173842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.174035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.174065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.174207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.174374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.174403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 
00:30:15.646 [2024-07-10 11:00:32.174587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.174737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.174780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.174971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.175110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.175139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.175309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.175491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.175519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.175670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.175846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.175903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.176095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.176236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.176267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.176471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.176624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.176655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.176807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.176926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.176953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 
00:30:15.646 [2024-07-10 11:00:32.177144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.177292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.177319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.177517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.177663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.177692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.177867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.178025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.178052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.178177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.178329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.178356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.178539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.178742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.178772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.178976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.179094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.179137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.179328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.179531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.179562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 
00:30:15.646 [2024-07-10 11:00:32.179733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.179902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.179930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.180101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.180256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.180300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.180438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.180580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.180610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.180780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.180976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.181002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.646 [2024-07-10 11:00:32.181158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.181360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.646 [2024-07-10 11:00:32.181389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.646 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.181571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.181727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.181755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.181960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.182155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.182184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 
00:30:15.647 [2024-07-10 11:00:32.182349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.182503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.182557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.182757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.182950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.182979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.183157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.183312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.183339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.183566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.183736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.183766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.183934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.184126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.184152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.184312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.184537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.184564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.184723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.184878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.184920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 
00:30:15.647 [2024-07-10 11:00:32.185083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.185275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.185306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.185437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.185617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.185644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.185790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.185917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.185961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.186146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.186275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.186302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.186452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.186606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.186649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.186823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.186968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.186994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.187184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.187363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.187390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 
00:30:15.647 [2024-07-10 11:00:32.187580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.187754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.187780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.187963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.188147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.188176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.188371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.188539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.188569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.188741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.188877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.188906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.189077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.189231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.189273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.189468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.189620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.189646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.189798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.190017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.190067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 
00:30:15.647 [2024-07-10 11:00:32.190236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.190406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.190441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.190606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.190790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.190816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.190972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.191182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.191241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.647 qpair failed and we were unable to recover it. 00:30:15.647 [2024-07-10 11:00:32.191431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.647 [2024-07-10 11:00:32.191561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.191589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.191790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.191963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.191993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.192168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.192325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.192355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.192543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.192675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.192702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 
00:30:15.648 [2024-07-10 11:00:32.192881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.193108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.193159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.193332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.193527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.193554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.193737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.193891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.193934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.194100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.194280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.194306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.194447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.194625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.194667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.194844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.194998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.195043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.195227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.195392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.195440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 
00:30:15.648 [2024-07-10 11:00:32.195610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.195733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.195764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.195898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.196092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.196133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.196323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.196517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.196546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.196701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.196826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.196854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.197008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.197168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.197194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.197374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.197572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.197600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.197753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.197907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.197934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 
00:30:15.648 [2024-07-10 11:00:32.198085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.198262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.198288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.198433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.198579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.648 [2024-07-10 11:00:32.198608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.648 qpair failed and we were unable to recover it. 00:30:15.648 [2024-07-10 11:00:32.198814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.198968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.198994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.199146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.199316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.199345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.199505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.199694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.199741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.199888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.200071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.200098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.200251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.200420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.200458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 
00:30:15.649 [2024-07-10 11:00:32.200656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.200836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.200862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.201014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.201168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.201196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.201353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.201517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.201547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.201733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.201913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.201940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.202167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.202322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.202363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.202563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.202707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.202734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.202942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.203220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.203278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 
00:30:15.649 [2024-07-10 11:00:32.203451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.203649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.203678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.203822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.203973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.204000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.204196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.204343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.204369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.204574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.204705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.204733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.204887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.205074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.205101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.205291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.205474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.205518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.205724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.205927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.205984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 
00:30:15.649 [2024-07-10 11:00:32.206157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.206310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.206336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.206535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.206700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.206729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.206924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.207105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.207131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.207316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.207454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.207484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.207680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.207922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.207951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.208121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.208254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.208283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.649 [2024-07-10 11:00:32.208481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.208647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.208676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 
00:30:15.649 [2024-07-10 11:00:32.208843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.208987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.649 [2024-07-10 11:00:32.209016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.649 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.209207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.209337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.209363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.209518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.209700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.209729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.209869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.210075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.210102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.210235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.210401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.210448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.210630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.210830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.210860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.210993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.211167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.211194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 
00:30:15.650 [2024-07-10 11:00:32.211344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.211516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.211546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.211721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.211871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.211914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.212108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.212269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.212298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.212467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.212602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.212632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.212829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.212978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.213005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.213185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.213348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.213377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.213589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.213749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.213776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 
00:30:15.650 [2024-07-10 11:00:32.213930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.214052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.214079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.214232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.214410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.214448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.214615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.214787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.214822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.214964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.215113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.215155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.215322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.215495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.215526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.215712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.215866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.215892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.216078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.216200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.216227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 
00:30:15.650 [2024-07-10 11:00:32.216451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.216683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.216737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.216932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.217127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.217154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.217275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.217419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.217451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.217600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.217740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.217769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.217908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.218104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.218133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.218310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.218508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.218538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.650 qpair failed and we were unable to recover it. 00:30:15.650 [2024-07-10 11:00:32.218713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.650 [2024-07-10 11:00:32.218870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.218896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 
00:30:15.651 [2024-07-10 11:00:32.219073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.219227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.219253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.219436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.219567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.219593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.219748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.219899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.219925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.220080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.220251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.220280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.220438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.220610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.220653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.220798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.221004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.221030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.221189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.221331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.221360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 
00:30:15.651 [2024-07-10 11:00:32.221535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.221695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.221721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.221878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.222030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.222057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.222214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.222364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.222391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.222533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.222682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.222708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.222860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.223044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.223073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.223214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.223370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.223397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.223523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.223688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.223715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 
00:30:15.651 [2024-07-10 11:00:32.223897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.224130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.224182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.224379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.224555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.224584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.224758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.224876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.224902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.225057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.225203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.225232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.225374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.225518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.225549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.225732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.225882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.225909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.226055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.226186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.226217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 
00:30:15.651 [2024-07-10 11:00:32.226430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.226558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.226585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.226766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.226919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.226961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.227136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.227305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.227335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.227534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.227824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.227878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.228054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.228260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.228289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.651 qpair failed and we were unable to recover it. 00:30:15.651 [2024-07-10 11:00:32.228495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.651 [2024-07-10 11:00:32.228655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.228681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.228850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.229012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.229041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 
00:30:15.652 [2024-07-10 11:00:32.229253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.229413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.229448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.229623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.229843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.229895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.230096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.230293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.230322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.230494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.230652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.230695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.230893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.231135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.231191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.231396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.231557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.231583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.231716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.231895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.231937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 
00:30:15.652 [2024-07-10 11:00:32.232079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.232250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.232279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.232451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.232624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.232653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.232831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.232997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.233023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.233147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.233279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.233306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.233509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.233686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.233727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.233883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.234004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.234032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.234180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.234327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.234357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 
00:30:15.652 [2024-07-10 11:00:32.234541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.234711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.234740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.234949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.235116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.235145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.235313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.235478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.235507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.235677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.235816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.235845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.236000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.236174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.236218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.236381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.236550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.236579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.236756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.236927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.236956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 
00:30:15.652 [2024-07-10 11:00:32.237135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.237280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.237310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.237499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.237675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.237710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.237839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.238012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.238040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.238216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.238371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.652 [2024-07-10 11:00:32.238414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.652 qpair failed and we were unable to recover it. 00:30:15.652 [2024-07-10 11:00:32.238572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.238718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.238746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.653 qpair failed and we were unable to recover it. 00:30:15.653 [2024-07-10 11:00:32.238906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.239116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.239169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.653 qpair failed and we were unable to recover it. 00:30:15.653 [2024-07-10 11:00:32.239348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.239516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.239547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.653 qpair failed and we were unable to recover it. 
00:30:15.653 [2024-07-10 11:00:32.239730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.239883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.239909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.653 qpair failed and we were unable to recover it. 00:30:15.653 [2024-07-10 11:00:32.240092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.240228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.240258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.653 qpair failed and we were unable to recover it. 00:30:15.653 [2024-07-10 11:00:32.240402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.240567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.240609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.653 qpair failed and we were unable to recover it. 00:30:15.653 [2024-07-10 11:00:32.240817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.240972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.653 [2024-07-10 11:00:32.241000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.241189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.241350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.241379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.241602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.241777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.241805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.242002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.242171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.242199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 
00:30:15.654 [2024-07-10 11:00:32.242375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.242525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.242551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.242769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.243036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.243090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.243284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.243504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.243534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.243704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.243881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.243910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.244078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.244227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.244268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.244443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.244656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.244682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.244804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.244978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.245007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 
00:30:15.654 [2024-07-10 11:00:32.245184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.245366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.245409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.245583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.245769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.245839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.246030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.246238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.246289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.246494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.246621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.246648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.246773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.246929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.246954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.654 qpair failed and we were unable to recover it. 00:30:15.654 [2024-07-10 11:00:32.247131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.654 [2024-07-10 11:00:32.247260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.247288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.247457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.247616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.247658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 
00:30:15.655 [2024-07-10 11:00:32.247834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.247980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.248006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.248212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.248367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.248407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.248603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.248727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.248752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.248947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.249181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.249232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.249433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.249603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.249632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.249842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.249995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.250045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.250220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.250409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.250445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 
00:30:15.655 [2024-07-10 11:00:32.250642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.250832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.250860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.251025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.251177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.251220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.251413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.251581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.251609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.251759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.251899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.251927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.252096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.252248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.252289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.252423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.252588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.252614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.252792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.253007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.253064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 
00:30:15.655 [2024-07-10 11:00:32.253265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.253445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.253472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.253598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.253784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.253810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.254008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.254290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.254347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.254550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.254750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.254779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.254949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.255064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.255090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.255244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.255402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.255435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.255567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.255718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.255744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 
00:30:15.655 [2024-07-10 11:00:32.255927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.256094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.256122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.256293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.256460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.256490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.256700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.256844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.256879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.257051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.257182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.257211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.655 qpair failed and we were unable to recover it. 00:30:15.655 [2024-07-10 11:00:32.257357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.655 [2024-07-10 11:00:32.257530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.257559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.257739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.257887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.257930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.258134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.258312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.258338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 
00:30:15.656 [2024-07-10 11:00:32.258520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.258693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.258721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.258915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.259125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.259177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.259350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.259486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.259515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.259707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.259938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.259993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.260137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.260287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.260312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.260469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.260627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.260654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.260845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.261001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.261026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 
00:30:15.656 [2024-07-10 11:00:32.261154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.261351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.261379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.261532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.261680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.261709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.261849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.262016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.262044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.262219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.262395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.262423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.262626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.262750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.262777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.262985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.263205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.263231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.263389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.263522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.263565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 
00:30:15.656 [2024-07-10 11:00:32.263736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.263894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.263922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.264111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.264257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.264285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.264464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.264612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.264654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.264800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.264933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.264962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.265129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.265311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.265337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.265482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.265617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.265643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.265797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.265977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 
00:30:15.656 [2024-07-10 11:00:32.266150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.266515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.266787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.266990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.656 qpair failed and we were unable to recover it. 00:30:15.656 [2024-07-10 11:00:32.267136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.656 [2024-07-10 11:00:32.267271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.267299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.267477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.267606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.267632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.267812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.267943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.267971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.268115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.268288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.268316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 
00:30:15.657 [2024-07-10 11:00:32.268493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.268644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.268670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.268789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.268932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.268960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.269107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.269263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.269288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.269506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.269639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.269667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.269820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.269971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.270181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.270456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 
00:30:15.657 [2024-07-10 11:00:32.270789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.270955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.271124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.271324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.271354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.271532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.271682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.271726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.271905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.272057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.272083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.272261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.272452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.272479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.272634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.272760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.272786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.272908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.273081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.273109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 
00:30:15.657 [2024-07-10 11:00:32.273283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.273419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.273456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.273619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.273743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.273769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.273943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.274075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.274103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.274244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.274422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.274455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.274588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.274715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.274745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.274931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.275102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.275127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.275297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.275442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.275470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 
00:30:15.657 [2024-07-10 11:00:32.275618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.275761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.275803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.275982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.276138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.276164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.657 qpair failed and we were unable to recover it. 00:30:15.657 [2024-07-10 11:00:32.276344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.657 [2024-07-10 11:00:32.276484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.276514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.276664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.276794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.276820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.276949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.277125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.277154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.277327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.277480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.277506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.277665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.277797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.277823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 
00:30:15.658 [2024-07-10 11:00:32.277951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.278100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.278128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.278276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.278453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.278483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.278654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.278782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.278809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.278977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.279111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.279140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.279297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.279493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.279523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.279670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.279830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.279872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.280047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.280212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.280241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 
00:30:15.658 [2024-07-10 11:00:32.280441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.280568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.280594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.280768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.280917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.280943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.281081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.281212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.281241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.281417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.281556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.281582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.281816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.281958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.281988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.282154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.282328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.282354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.282494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.282644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.282672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 
00:30:15.658 [2024-07-10 11:00:32.282852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.282983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.283009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.283147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.283306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.283334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.283538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.283680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.283721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.283869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.284002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.284028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.284175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.284328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.284366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.284539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.284694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.284721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.284901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.285082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.285109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 
00:30:15.658 [2024-07-10 11:00:32.285266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.285418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.285453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.658 qpair failed and we were unable to recover it. 00:30:15.658 [2024-07-10 11:00:32.285605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.658 [2024-07-10 11:00:32.285764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.285794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.285974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.286121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.286164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.286325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.286487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.286516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.286700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.286824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.286867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.287065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.287220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.287246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.287411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.287567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.287595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 
00:30:15.659 [2024-07-10 11:00:32.287769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.288010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.288036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.288192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.288325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.288368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.288522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.288692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.288721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.288892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.289035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.289065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.289210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.289338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.289364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.289545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.289753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.289779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.289906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.290107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.290136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 
00:30:15.659 [2024-07-10 11:00:32.290312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.290462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.290509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.290658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.290850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.290876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.291004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.291156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.291182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.291382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.291568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.291594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.291755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.291899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.291928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.292103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.292233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.292262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 00:30:15.659 [2024-07-10 11:00:32.292411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.292538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.292568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.659 qpair failed and we were unable to recover it. 
00:30:15.659 [2024-07-10 11:00:32.292747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.659 [2024-07-10 11:00:32.292923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.292950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.293103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.293307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.293336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.293516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.293675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.293703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.293853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.294000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.294027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.294183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.294377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.294403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.294587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.294762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.294791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.294938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.295145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.295172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 
00:30:15.660 [2024-07-10 11:00:32.295330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.295464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.295493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.295669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.295837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.295882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.296067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.296272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.296313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.296484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.296681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.296725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.296919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.297042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.297069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.297223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.297354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.297381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.297543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.297713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.297743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 
00:30:15.660 [2024-07-10 11:00:32.297911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.298074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.298119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.298250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.298402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.298435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.298615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.298835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.298865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.299029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.299170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.299198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.299372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.299580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.299625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.299799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.300003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.300047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.300202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.300352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.300379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 
00:30:15.660 [2024-07-10 11:00:32.300568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.300769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.300813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.300989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.301133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.301160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.301314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.301518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.301562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.301702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.301921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.301965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.302128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.302297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.660 [2024-07-10 11:00:32.302324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.660 qpair failed and we were unable to recover it. 00:30:15.660 [2024-07-10 11:00:32.302500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.302692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.302736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.302911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.303057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.303085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 
00:30:15.661 [2024-07-10 11:00:32.303266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.303440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.303492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.303656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.303872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.303917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.304054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.304184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.304211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.304389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.304549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.304594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.304776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.304945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.304990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.305144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.305270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.305298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.305498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.305695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.305745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 
00:30:15.661 [2024-07-10 11:00:32.305911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.306076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.306103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.306226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.306378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.306405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.306630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.306795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.306839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.307029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.307214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.307241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.307402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.307553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.307597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.307777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.307999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.308042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.308221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.308372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.308399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 
00:30:15.661 [2024-07-10 11:00:32.308580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.308803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.308853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.309054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.309196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.309223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.309374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.309547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.309591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.309741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.309918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.309966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.310150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.310311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.310338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.310521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.310756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.310800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.310949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.311078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.311104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 
00:30:15.661 [2024-07-10 11:00:32.311258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.311448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.311475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.311660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.311858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.311902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.312056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.312234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.312261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.661 qpair failed and we were unable to recover it. 00:30:15.661 [2024-07-10 11:00:32.312420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.661 [2024-07-10 11:00:32.312582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.312627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.312801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.313005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.313033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.313220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.313374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.313401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.313559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.313722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.313767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 
00:30:15.662 [2024-07-10 11:00:32.313972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.314119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.314145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.314276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.314444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.314499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.314643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.314802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.314846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.315055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.315226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.315253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.315390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.315547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.315574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.315763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.315910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.315936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.316069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.316197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.316224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 
00:30:15.662 [2024-07-10 11:00:32.316353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.316529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.316559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.316767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.316940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.316987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.317170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.317326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.317352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.317528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.317691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.317736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.317945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.318137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.318181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.318336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.318475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.318505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.318692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.318899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.318926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 
00:30:15.662 [2024-07-10 11:00:32.319084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.319234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.319260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.319415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.319599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.319628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.319812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.320035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.320079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.320238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.320395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.320422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.320636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.320827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.320871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.321051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.321201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.321227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.321348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.321489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.321518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 
00:30:15.662 [2024-07-10 11:00:32.321687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.321877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.321921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.662 qpair failed and we were unable to recover it. 00:30:15.662 [2024-07-10 11:00:32.322056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.322235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.662 [2024-07-10 11:00:32.322262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.322421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.322588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.322633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.322826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.323048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.323092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.323276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.323401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.323434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.323617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.323853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.323897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.324089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.324258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.324284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 
00:30:15.663 [2024-07-10 11:00:32.324441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.324623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.324667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.324823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.325020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.325064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.325214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.325395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.325422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.325607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.325788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.325832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.326013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.326163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.326189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.326345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.326513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.326558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.326719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.326908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.326952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 
00:30:15.663 [2024-07-10 11:00:32.327086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.327216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.327242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.327365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.327510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.327554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.327703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.327897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.327941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.328099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.328244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.328270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.328399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.328585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.328629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.328831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.328973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.329001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.329183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.329340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.329368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 
00:30:15.663 [2024-07-10 11:00:32.329548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.329720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.329764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.329967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.330122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.330148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.330270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.330401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.330439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.330610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.330767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.330792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.330942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.331081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.331109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.331268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.331413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.331446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.663 qpair failed and we were unable to recover it. 00:30:15.663 [2024-07-10 11:00:32.331594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.331743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.663 [2024-07-10 11:00:32.331771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 
00:30:15.664 [2024-07-10 11:00:32.331948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.332112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.332155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.332340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.332540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.332586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.332733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.332927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.332970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.333121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.333289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.333316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.333491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.333724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.333750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.333909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.334078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.334106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.334245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.334364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.334391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 
00:30:15.664 [2024-07-10 11:00:32.334568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.334763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.334807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.334976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.335121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.335148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.335304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.335480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.335508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.335698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.335885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.335929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.336118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.336263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.336289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.336416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.336637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.336681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.336896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.337042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.337068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 
00:30:15.664 [2024-07-10 11:00:32.337221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.337398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.337429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.337583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.337762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.337810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.338017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.338182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.338209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.338392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.338599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.338643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.338790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.338988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.339031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.339185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.339343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.339368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.339500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.339655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.339703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 
00:30:15.664 [2024-07-10 11:00:32.339875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.340067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.664 [2024-07-10 11:00:32.340111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.664 qpair failed and we were unable to recover it. 00:30:15.664 [2024-07-10 11:00:32.340259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.340442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.340469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.340646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.340837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.340881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.341062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.341207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.341233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.341387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.341571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.341616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.341794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.341981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.342023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.342144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.342303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.342329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 
00:30:15.665 [2024-07-10 11:00:32.342529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.342715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.342758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.342936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.343133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.343159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.343336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.343453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.343479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.343629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.343851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.343892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.344047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.344205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.344232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.344414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.344576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.344619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.344791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.345011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.345056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 
00:30:15.665 [2024-07-10 11:00:32.345204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.345358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.345389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.345546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.345719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.345762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.345901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.346103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.346129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.346258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.346452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.346479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.346660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.346831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.346876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.347079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.347245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.347271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.347439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.347643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.347687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 
00:30:15.665 [2024-07-10 11:00:32.347831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.348016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.348059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.348213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.348332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.348359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.348564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.348788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.348833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.349011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.349189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.349220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.349350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.349528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.665 [2024-07-10 11:00:32.349558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.665 qpair failed and we were unable to recover it. 00:30:15.665 [2024-07-10 11:00:32.349727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.349900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.349944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.350149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.350319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.350346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 
00:30:15.666 [2024-07-10 11:00:32.350497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.350721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.350765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.350942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.351175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.351201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.351356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.351541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.351585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.351737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.351893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.351935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.352111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.352317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.352342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.352523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.352721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.352764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.352977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.353198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.353246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 
00:30:15.666 [2024-07-10 11:00:32.353411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.353619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.353663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.353878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.354237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.354291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.354485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.354662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.354706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.354909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.355065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.355108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.355267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.355420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.355452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.355581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.355735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.355780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.355956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.356179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.356223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 
00:30:15.666 [2024-07-10 11:00:32.356349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.356520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.356565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.356751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.356974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.357022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.357196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.357351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.357383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.357553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.357716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.357760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.357939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.358113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.358140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.358321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.358483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.358511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.358664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.358833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.358876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 
00:30:15.666 [2024-07-10 11:00:32.359052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.359224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.359251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.359405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.359606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.359650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.359835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.359983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.360011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.666 [2024-07-10 11:00:32.360143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.360299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.666 [2024-07-10 11:00:32.360325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.666 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.360499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.360694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.360746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.360921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.361097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.361123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.361284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.361432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.361487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 
00:30:15.667 [2024-07-10 11:00:32.361646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.361867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.361912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.362102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.362282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.362308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.362462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.362636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.362679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.362879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.363069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.363110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.363260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.363406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.363440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.363599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.363805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.363848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.364053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.364249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.364275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 
00:30:15.667 [2024-07-10 11:00:32.364489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.364695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.364747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.364956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.365176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.365231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.365419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.365636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.365682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.365888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.366148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.366191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.366374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.366509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.366535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.366721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.366952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.366996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.367206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.367376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.367404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 
00:30:15.667 [2024-07-10 11:00:32.367623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.367816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.367858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.368065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.368264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.368290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.368421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.368609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.368653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.368806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.369002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.369046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.369201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.369357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.369383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.369535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.369707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.369733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.369885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.370068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.370096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 
00:30:15.667 [2024-07-10 11:00:32.370222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.370410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.370456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.370630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.370793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.667 [2024-07-10 11:00:32.370837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.667 qpair failed and we were unable to recover it. 00:30:15.667 [2024-07-10 11:00:32.371007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.371202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.371228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.371376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.371557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.371602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.371779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.371973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.372017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.372166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.372351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.372377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.372523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.372749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.372792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 
00:30:15.668 [2024-07-10 11:00:32.372979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.373176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.373203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.373329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.373504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.373548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.373754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.373976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.374020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.374188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.374371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.374398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.374562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.374736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.374781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.374926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.375072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.375100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.375279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.375456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.375491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 
00:30:15.668 [2024-07-10 11:00:32.375648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.375855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.375898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.376075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.376219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.376248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.376407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.376585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.376630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.376855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.377052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.377096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.377254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.377437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.377476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.377659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.377892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.377936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.378126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.378317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.378343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 
00:30:15.668 [2024-07-10 11:00:32.378519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.378730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.378772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.378964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.379159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.379185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.379339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.379465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.379492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.379637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.379852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.379896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.380103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.380247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.380276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.380495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.380685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.380736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 00:30:15.668 [2024-07-10 11:00:32.380858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.381015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.381041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.668 qpair failed and we were unable to recover it. 
00:30:15.668 [2024-07-10 11:00:32.381201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.381383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.668 [2024-07-10 11:00:32.381409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.381611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.381810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.381839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.382036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.382233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.382260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.382421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.382604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.382649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.382804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.383024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.383069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.383251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.383402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.383434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.383628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.383832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.383875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 
00:30:15.669 [2024-07-10 11:00:32.384018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.384207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.384250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.384436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.384600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.384626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.384788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.384945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.384989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.385211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.385440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.385491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.385626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.385813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.385839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.385991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.386148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.386174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.386352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.386494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.386521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 
00:30:15.669 [2024-07-10 11:00:32.386642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.386843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.386873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.387035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.387205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.387235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.387440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.387595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.387621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.387784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.387955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.387985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.388178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.388345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.388374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.388601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.388733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.388760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.669 [2024-07-10 11:00:32.388980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.389169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.389203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 
00:30:15.669 [2024-07-10 11:00:32.389391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.389582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.669 [2024-07-10 11:00:32.389609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.669 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.389729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.389887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.389931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.390098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.390396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.390477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.390629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.390822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.390850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.391123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.391306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.391332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.391494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.391673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.391699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.391882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.392049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.392078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 
00:30:15.670 [2024-07-10 11:00:32.392245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.392413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.392449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.392619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.392831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.392857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.393007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.393240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.393306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.393473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.393650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.393676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.393827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.394116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.394180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.394379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.394582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.394609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.394764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.394947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.394976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 
00:30:15.670 [2024-07-10 11:00:32.395269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.395458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.395502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.395657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.395815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.395841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.396026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.396194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.396222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.396377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.396533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.396559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.396727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.396897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.396926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.397264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.397476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.397506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.397638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.397833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.397898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 
00:30:15.670 [2024-07-10 11:00:32.398077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.398232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.398260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.398471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.398627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.670 [2024-07-10 11:00:32.398653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.670 qpair failed and we were unable to recover it. 00:30:15.670 [2024-07-10 11:00:32.398846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.399130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.399188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.399386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.399533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.399560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.399686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.399817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.399845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.400127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.400307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.400336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.400517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.400699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.400743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 
00:30:15.671 [2024-07-10 11:00:32.401011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.401233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.401294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.401435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.401609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.401635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.401813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.401955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.401984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.402142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.402355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.402384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.402574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.402750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.402784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.402940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.403094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.403121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.403270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.403429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.403456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 
00:30:15.671 [2024-07-10 11:00:32.403576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.403715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.403744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.403942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.404092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.404135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.404308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.404488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.404514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.404663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.404861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.404890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.405061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.405252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.405281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.405454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.405591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.405618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.405791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.405959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.405988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 
00:30:15.671 [2024-07-10 11:00:32.406186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.406324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.406353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.406560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.406732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.406761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.406943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.407124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.407150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.407340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.407524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.407550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.407708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.407831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.407858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.407991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.408152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.408182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.671 qpair failed and we were unable to recover it. 00:30:15.671 [2024-07-10 11:00:32.408355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.671 [2024-07-10 11:00:32.408544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.672 [2024-07-10 11:00:32.408570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.672 qpair failed and we were unable to recover it. 
[... the same failure sequence -- two posix.c:1032:posix_sock_create connect() errors (errno = 111), one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats continuously from [2024-07-10 11:00:32.408704] through [2024-07-10 11:00:32.463525] (elapsed 00:30:15.672 - 00:30:15.962) ...]
00:30:15.962 [2024-07-10 11:00:32.463731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-07-10 11:00:32.463910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.962 [2024-07-10 11:00:32.463936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:15.962 qpair failed and we were unable to recover it.
00:30:15.962 [2024-07-10 11:00:32.464093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.464274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.464300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.464503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.464647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.464682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.464859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.465017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.465043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.465210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.465338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.465369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.465596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.465839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.465901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.466076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.466257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.466301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.466455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.466615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.466642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 
00:30:15.962 [2024-07-10 11:00:32.466779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.466929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.466956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.467145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.467303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.962 [2024-07-10 11:00:32.467332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.962 qpair failed and we were unable to recover it. 00:30:15.962 [2024-07-10 11:00:32.467473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.467659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.467701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.467874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.468051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.468077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.468230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.468435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.468461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.468617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.468813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.468881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.469073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.469275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.469303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 
00:30:15.963 [2024-07-10 11:00:32.469441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.469578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.469608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.469773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.469907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.469933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.470088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.470262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.470293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.470477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.470660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.470703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.470885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.471117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.471182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.471369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.471553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.471596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.471733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.471878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.471908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 
00:30:15.963 [2024-07-10 11:00:32.472106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.472231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.472257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.472378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.472522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.472549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.472722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.472849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.472879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.473049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.473247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.473276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.473444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.473650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.473679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.473850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.474089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.474150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.474333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.474480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.474525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 
00:30:15.963 [2024-07-10 11:00:32.474720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.474900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.474929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.475104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.475308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.475337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.475481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.475638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.475680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.475841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.476050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.476080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.476248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.476435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.476465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.476640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.476771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.476799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.476962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.477220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.477272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 
00:30:15.963 [2024-07-10 11:00:32.477446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.477650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.477679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.477856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.478009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.478052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.478248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.478415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.478470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.478640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.478832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.478897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.479073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.479228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.479254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.479439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.479644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.479671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.479827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.479976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.480007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 
00:30:15.963 [2024-07-10 11:00:32.480206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.480379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.480408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.480583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.480749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.480778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.480956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.481109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.481136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.481290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.481442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.481470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.481603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.481781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.481808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.481992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.482162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.482191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.963 qpair failed and we were unable to recover it. 00:30:15.963 [2024-07-10 11:00:32.482351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.482513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.963 [2024-07-10 11:00:32.482559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-07-10 11:00:32.482687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.482833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.482859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.483038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.483219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.483249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.483451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.483655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.483684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.483838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.483996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.484022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.484216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.484398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.484447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.484628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.484774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.484818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.484944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.485124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.485151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-07-10 11:00:32.485281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.485442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.485473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.485655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.485849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.485878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.486094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.486252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.486294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.486462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.486619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.486645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.486846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.487023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.487052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.487221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.487380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.487409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.487589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.487768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.487794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-07-10 11:00:32.487974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.488149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.488178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.488379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.488564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.488594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.488776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.488958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.489000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.489131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.489327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.489356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.489537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.489669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.489713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.489854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.490131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.490183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.490370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.490536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.490563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-07-10 11:00:32.490718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.490872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.490898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.491019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.491197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.491238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.491382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.491554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.491585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.491747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.491931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.491957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.492082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.492233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.492259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.492387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.492514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.492551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.492683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.492891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.492920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-07-10 11:00:32.493113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.493291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.493317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.493471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.493659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.493686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.493842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.494020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.494049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.494242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.494393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.494422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.494648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.494775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.494803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.494921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.495118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.495144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.495322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.495494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.495546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 
00:30:15.964 [2024-07-10 11:00:32.495712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.495954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.496010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.496213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.496381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.496410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.964 qpair failed and we were unable to recover it. 00:30:15.964 [2024-07-10 11:00:32.496558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.496719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.964 [2024-07-10 11:00:32.496748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.496926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.497118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.497145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.497273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.497470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.497501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.497699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.497868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.497898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.498078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.498206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.498233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-07-10 11:00:32.498393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.498555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.498582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.498761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.498930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.498959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.499129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.499308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.499337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.499508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.499659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.499688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.499826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.500009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.500051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.500209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.500404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.500439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.500635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.500798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.500827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-07-10 11:00:32.500958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.501101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.501136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.501277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.501463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.501490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.501668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.501972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.502001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.502179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.502320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.502350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.502526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.502751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.502822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.503052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.503231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.503262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.503395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.503568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.503601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-07-10 11:00:32.503782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.503952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.503981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.504141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.504277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.504313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.504558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.504712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.504740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.504900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.505124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.505150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.505353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.505510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.505537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.505716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.505927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.505953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.506130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.506292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.506321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-07-10 11:00:32.506612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.506967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.507016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.507200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.507401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.507435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.507619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.507837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.507901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.508060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.508216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.508243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.508439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.508609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.508638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.508836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.509005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.509031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.509214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.509395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.509434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-07-10 11:00:32.509631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.509755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.509799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.509987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.510140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.510166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.510341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.510504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.510534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.510673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.510842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.510871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.511038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.511161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.511187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.511337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.511536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.511566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 00:30:15.965 [2024-07-10 11:00:32.511727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.511884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.511911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.965 qpair failed and we were unable to recover it. 
00:30:15.965 [2024-07-10 11:00:32.512092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.965 [2024-07-10 11:00:32.512265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.512294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.512447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.512590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.512616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.512854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.513011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.513037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.513207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.513379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.513409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.513594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.513745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.513771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.513896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.514058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.514100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.514254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.514407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.514451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.514679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.514803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.514830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.515028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.515154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.515180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.515336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.515507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.515537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.515687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.515858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.515885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.516038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.516207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.516233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.516384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.516552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.516582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.516757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.516884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.516911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.517059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.517202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.517231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.517399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.517584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.517610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.517741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.517921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.517949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.518101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.518221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.518258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.518447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.518624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.518650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.518785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.518975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.519004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.519147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.519284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.519324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.519511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.519650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.519694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.519867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.520006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.520035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.520204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.520335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.520369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.520541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.520707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.520736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.520878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.521035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.521061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.521201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.521338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.521368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.521535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.521699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.521728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.521899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.522057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.522085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.522247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.522415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.522451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.522648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.522798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.522840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.523002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.523154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.523181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.523340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.523504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.523531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.523658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.523783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.523808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.523971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.524117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.524148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.524319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.524454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.524483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.524641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.524794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.524823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.524971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.525125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.525151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.525282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.525467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.525497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.525644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.525809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.525838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.525992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.526125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.526152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.526302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.526453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.526480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.526601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.526746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.526774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.526917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.527117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.527143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.527321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.527474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.527504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.527661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.527823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.527849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.528039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.528161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.528188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.528324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.528489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.528518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.528718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.528850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.528877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 
00:30:15.966 [2024-07-10 11:00:32.529039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.529171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.529197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.529322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.529487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.529517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.966 [2024-07-10 11:00:32.529644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.529814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.966 [2024-07-10 11:00:32.529843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.966 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.529976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.530137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.530166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.530324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.530484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.530511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.530716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.530864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.530890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.531079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.531231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.531257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.531414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.531568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.531597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.531746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.531895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.531921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.532075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.532266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.532295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.532474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.532680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.532709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.532878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.533256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.533544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.533808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.533976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.534005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.534153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.534341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.534367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.534517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.534714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.534743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.534951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.535103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.535130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.535327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.535507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.535562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.535712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.535846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.535874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.536030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.536182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.536209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.536382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.536554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.536586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.536732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.536932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.536962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.537109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.537275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.537304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.537483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.537633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.537679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.537831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.538108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.538165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.538338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.538478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.538519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.538712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.538873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.538902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.539054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.539182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.539208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.539386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.539531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.539561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.539769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.539899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.539927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.540072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.540230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.540272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.540452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.540633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.540660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.540809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.540965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.540991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.541142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.541333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.541359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.541516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.541672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.541701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.541883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.542044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.542096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.542277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.542413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.542446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.542643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.542787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.542814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.542989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.543144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.543173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.543343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.543489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.543516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.543647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.543801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.543830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.543998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.544141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.544170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.544338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.544535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.544565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.544733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.544864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.544890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.545073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.545212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.545240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.545421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.545559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.545585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.545761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.545900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.545929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 00:30:15.967 [2024-07-10 11:00:32.546079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.546225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.967 [2024-07-10 11:00:32.546252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.967 qpair failed and we were unable to recover it. 
00:30:15.967 [2024-07-10 11:00:32.546456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:15.967 [2024-07-10 11:00:32.546609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:15.967 [2024-07-10 11:00:32.546637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 
00:30:15.967 qpair failed and we were unable to recover it. 
[The same three-line failure sequence (two posix_sock_create connect() errors with errno = 111, then the nvme_tcp_qpair_connect_sock error for tqpair=0x23ae350, addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every remaining connection attempt logged between 11:00:32.546 and 11:00:32.599.]
00:30:15.971 [2024-07-10 11:00:32.599668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.599787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.599813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.599996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.600160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.600186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.600367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.600541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.600567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.600728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.600894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.600922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.601087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.601283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.601311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.601478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.601608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.601634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.601778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.601958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.602000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 
00:30:15.971 [2024-07-10 11:00:32.602182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.602396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.602432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.602616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.602778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.602806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.603007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.603169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.603196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.603327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.603506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.603552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.603738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.603898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.603924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.604134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.604282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.604308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.604439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.604604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.604645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 
00:30:15.971 [2024-07-10 11:00:32.604788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.604952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.604981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.605131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.605270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.605299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.605508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.605705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.605734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.605882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.606010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.606039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.606255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.606383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.606411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.606599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.606735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.606779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.606936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.607103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.607129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 
00:30:15.971 [2024-07-10 11:00:32.607318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.607488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.607518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.607677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.607814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.607840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.608048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.608178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.608207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.608372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.608544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.608573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.608717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.608844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.608871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.609043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.609182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.609212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.609408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.609578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.609605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 
00:30:15.971 [2024-07-10 11:00:32.609750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.609873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.609900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.610063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.610213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.610239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.610417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.610567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.610596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.610798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.610980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.611022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.611196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.611330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.611359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.611531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.611685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.611729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.611917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.612098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.612127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 
00:30:15.971 [2024-07-10 11:00:32.612304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.612459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.612492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.612677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.612817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.612852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.613002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.613155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.613181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.613316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.613527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.613554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.613678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.613857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.613885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.614031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.614199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.614229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.614408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.614548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.614591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 
00:30:15.971 [2024-07-10 11:00:32.614770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.614893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.614919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.615067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.615206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.615235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.615434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.615596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.615625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.971 [2024-07-10 11:00:32.615786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.615936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.971 [2024-07-10 11:00:32.615963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.971 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.616128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.616305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.616336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.616497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.616614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.616640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.616831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.616989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.617032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.617199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.617344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.617374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.617547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.617696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.617723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.617878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.618068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.618093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.618229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.618359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.618385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.618519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.618698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.618728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.618880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.619028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.619055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.619206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.619366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.619392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.619553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.619680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.619723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.619939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.620090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.620117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.620273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.620387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.620413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.620580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.620738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.620767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.620952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.621122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.621152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.621314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.621507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.621534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.621665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.621856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.621882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.622068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.622242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.622268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.622392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.622541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.622567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.622690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.622850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.622877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.622996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.623126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.623152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.623359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.623491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.623517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.623647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.623861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.623891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.624048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.624230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.624259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.624439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.624574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.624600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.624745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.624932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.624961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.625100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.625274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.625304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.625488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.625616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.625642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.625814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.625979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.626154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.626457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.626741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.626952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.627147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.627289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.627318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.627467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.627632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.627658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.627810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.628003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.628050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.628200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.628365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.628397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.628557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.628688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.628726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.628918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.629089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.629118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.629285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.629487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.629514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.629649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.629809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.629835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.630026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.630190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.630218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.630376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.630508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.630538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.630666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.630817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.630843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.630998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.631149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.631176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.631336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.631490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.631536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 
00:30:15.972 [2024-07-10 11:00:32.631692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.631889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.631918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.632123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.632307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.632336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.632517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.632668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.632694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.632869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.633043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.633072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.633225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.633404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.633439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.633573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.633699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.633726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.972 qpair failed and we were unable to recover it. 00:30:15.972 [2024-07-10 11:00:32.633877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.972 [2024-07-10 11:00:32.634062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.634114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 
00:30:15.973 [2024-07-10 11:00:32.634286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.634459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.634504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.634629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.634773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.634800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.634997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.635223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.635249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.635408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.635604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.635630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.635803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.635973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.635999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.636122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.636242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.636268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.636453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.636580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.636606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 
00:30:15.973 [2024-07-10 11:00:32.636734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.636863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.636890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.637107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.637276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.637305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.973 qpair failed and we were unable to recover it. 00:30:15.973 [2024-07-10 11:00:32.637494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.973 [2024-07-10 11:00:32.637617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.637643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.637782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.637966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.637992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.638168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.638322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.638353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.638515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.638642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.638669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.638819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.638974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.639017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.639218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.639387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.639416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.639571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.639697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.639724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.639865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.640094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.640123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.640293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.640483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.640511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.640643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.640813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.640857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.641103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.641334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.641376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.641579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.641743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.641777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.642028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.642238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.642267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.642466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.642598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.642624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.642820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.643067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.643095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.643266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.643401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.643437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.643578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.643739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.643783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.643969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.644187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.644232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.644364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.644515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.644542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.644679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.644878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.644910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.645139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.645337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.645365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.645566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.645695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.645737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.645928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.646150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.646180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.646381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.646538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.646564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.646734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.646952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.646999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.647267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.647419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.647458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.647602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.647726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.647751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.647930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.648075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.648105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.648318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.648525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.648552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.648679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.648798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.648824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.649063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.649260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.649289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.649438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.649588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.649614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.649740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.649894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.649939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.650145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.650260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.650303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.650489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.650630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.650655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.650783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.650942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.650968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.651092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.651261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.651289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.651454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.651615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.651642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.651859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.652071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.652099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.652299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.652467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.652512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.652673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.652843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.652869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.653030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.653157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.653188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.653361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.653541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.653569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.653726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.653944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.653991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.654153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.654324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.654354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.654528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.654663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.654697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.654856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.655052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.655078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 
00:30:15.974 [2024-07-10 11:00:32.655228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.655381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.655422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.655573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.655705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.655731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.655891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.656039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.656065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.656186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.656360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.656386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.974 qpair failed and we were unable to recover it. 00:30:15.974 [2024-07-10 11:00:32.656559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.974 [2024-07-10 11:00:32.656758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.656787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.657053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.657254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.657282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.657449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.657610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.657636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.657780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.657969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.658020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.658182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.658383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.658409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.658538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.658688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.658718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.658917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.659086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.659115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.659285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.659452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.659506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.659681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.659853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.659879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.659997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.660199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.660228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.660436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.660586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.660614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.660833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.661001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.661029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.661215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.661379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.661408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.661613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.661782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.661810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.661980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.662172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.662220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.662408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.662554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.662581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.662769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.662929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.662958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.663147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.663317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.663346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.663570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.663713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.663742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.663940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.664133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.664160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.664318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.664505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.664532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.664725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.664911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.664944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.665167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.665354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.665382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.665582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.665739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.665765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.665931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.666127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.666189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.666391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.666561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.666587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.666745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.666916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.666945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.667119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.667260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.667286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.667437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.667605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.667631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.667773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.668085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.668119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.668318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.668488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.668518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.668696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.668865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.668891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.669108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.669272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.669300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.669500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.669655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.669681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.669813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.669967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.669994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.670174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.670370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.670398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.670585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.670744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.670770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.670922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.671089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.671117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.671304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.671508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.671537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.671706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.671867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.671895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.672093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.672287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.672316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.672487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.672681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.672711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.672867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.673024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.673050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.673207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.673360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.673401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.673545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.673702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.673727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.975 [2024-07-10 11:00:32.673922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.674111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.674139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.674317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.674474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.674500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.674684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.674949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.674979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.675187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.675342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.675385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.675592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.675793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.675826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.675983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.676184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.676213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 00:30:15.975 [2024-07-10 11:00:32.676379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.676532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.975 [2024-07-10 11:00:32.676562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.975 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.676698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.676867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.676896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.677110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.677237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.677263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.677453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.677663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.677692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.677890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.678111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.678140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.678282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.678450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.678479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.678652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.678823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.678852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.679032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.679233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.679262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.679407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.679586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.679615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.679811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.679946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.679976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.680108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.680289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.680315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.680469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.680624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.680668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.680834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.681123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.681167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.681371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.681518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.681547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.681724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.681861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.681889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.682086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.682218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.682245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.682368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.682558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.682587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.682749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.682915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.682943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.683107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.683313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.683340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.683494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.683648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.683675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.683826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.684091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.684117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.684277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.684458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.684493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.684667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.684844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.684873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.685052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.685225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.685258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.685454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.685620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.685648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.685769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.685906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.685935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.686107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.686274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.686302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.686473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.686623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.686649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.686819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.687034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.687060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.687260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.687391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.687419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.687559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.687722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.687752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.687952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.688101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.688130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.688296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.688443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.688472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.688656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.688825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.688851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.689059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.689268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.689293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.689451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.689606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.689633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.689768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.689990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.690019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.690230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.690442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.690471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.690614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.690776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.690805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.691000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.691201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.691229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.691439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.691620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.691646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.691826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.691965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.691995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.692152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.692315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.692343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 
00:30:15.976 [2024-07-10 11:00:32.692550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.692695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.692721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.692906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.693045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.693073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.693218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.693362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.693388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.693559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.693747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.693776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.976 [2024-07-10 11:00:32.693932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.694085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.976 [2024-07-10 11:00:32.694111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.976 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.694239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.694414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.694453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.694662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.694866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.694913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.695109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.695284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.695310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.695464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.695633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.695671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.695991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.696217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.696245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.696443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.696606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.696634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.696799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.697015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.697061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.697287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.697486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.697515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.697712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.697874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.697920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.698090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.698258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.698287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.698462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.698597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.698626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.698802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.698991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.699019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.699219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.699416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.699453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.699653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.699846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.699890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.700067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.700247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.700273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.700434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.700627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.700657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.700832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.701098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.701127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.701316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.701474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.701501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.701704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.701934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.701987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.702156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.702304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.702341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.702545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.702663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.702689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.702867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.703023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.703065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.703264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.703443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.703481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.703608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.703762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.703788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.704029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.704234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.704259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.704460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.704644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.704670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.704869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.705141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.705187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.705340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.705497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.705540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.705742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.705877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.705910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.706061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.706265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.706290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.706446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.706597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.706627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.706829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.707019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.707062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.707234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.707431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.707460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.707655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.707863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.707889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.708065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.708240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.708270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.708449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.708628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.708654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.708952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.709237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.709265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.709457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.709634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.709660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.709835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.710023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.710052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.710225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.710397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.710422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.710614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.710784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.710813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.710984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.711178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.711207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.711353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.711565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.711591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.711771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.711941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.711969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.712198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.712375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.712403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.712607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.712736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.712765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.712970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.713163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.713192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.713369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.713527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.713554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.713675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.713855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.713888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.714100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.714307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.714339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 00:30:15.977 [2024-07-10 11:00:32.714520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.714718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.714746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.977 qpair failed and we were unable to recover it. 
00:30:15.977 [2024-07-10 11:00:32.714889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.977 [2024-07-10 11:00:32.715058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.715084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.715214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.715394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.715420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.715628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.715856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.715907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.716100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.716268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.716302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.716450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.716585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.716610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.716858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.717146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.717172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.717350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.717544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.717571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.717727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.717940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.717966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.718124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.718302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.718331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.718528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.718669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.718709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.718875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.719057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.719105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.719302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.719436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.719477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.719626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.719754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.719781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.719929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.720121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.720150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.720317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.720466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.720505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.720675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.720842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.720871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.721020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.721167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.721194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.721373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.721509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.721539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.721709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.721884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.721912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.722083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.722268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.722294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.722443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.722571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.722611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.722901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.723158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.723211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.723392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.723579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.723620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.723810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.723963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.723989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.724183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.724342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.724369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.724569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.724743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.724769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.724965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.725146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.725173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.725296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.725443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.725470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.725607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.725758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.725784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.725981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.726145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.726174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.726339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.726497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.726524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.726647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.726824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.726865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.727039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.727191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.727217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.727436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.727606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.727634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.727780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.727966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.728000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.728222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.728367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.728394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.728525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.728668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.728694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.728878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.729084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.729110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.729271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.729454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.729495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.729691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.729879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.729905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.730053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.730240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.730266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.730418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.730618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.730644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.730779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.730959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.730985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.731153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.731328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.731354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.731550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.731679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.731715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.731915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.732183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.732230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.732412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.732585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.732612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.732768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.732946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.732989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.733162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.733327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.733353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 
00:30:15.978 [2024-07-10 11:00:32.733475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.733630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.733656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.733827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.733961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.733989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.978 [2024-07-10 11:00:32.734119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.734309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.978 [2024-07-10 11:00:32.734338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.978 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.734542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.734702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.734730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.734968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.735084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.735110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.735291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.735498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.735567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.735761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.735934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.735962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 
00:30:15.979 [2024-07-10 11:00:32.736142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.736258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.736284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.736443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.736716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.736746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.736945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.737121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.737147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.737302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.737479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.737508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.737674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.737872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.737901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.738067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.738264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.738292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.738487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.738667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.738693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 
00:30:15.979 [2024-07-10 11:00:32.738850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.739036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.739064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.739237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.739417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.739468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.739624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.739824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.739852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.740057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.740212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.740238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.740368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.740536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.740565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.740749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.740952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.740980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.741234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.741411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.741455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 
00:30:15.979 [2024-07-10 11:00:32.741672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.741878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.741907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.742076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.742204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.742232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.742435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.742625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.742653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.742858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.743015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.743041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.743172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.743378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.743408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.743581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.743717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.743746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.743923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.744101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.744143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 
00:30:15.979 [2024-07-10 11:00:32.744297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.744475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.744502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.744662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.744876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.744902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.745047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.745232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.745257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.745408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.745578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.745604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.745723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.746050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.746115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.746333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.746492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.746534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.746740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.746892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.746919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 
00:30:15.979 [2024-07-10 11:00:32.747095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.747244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.747286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.747441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.747627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.747653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.747808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.747959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.747985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.748156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.748350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.748379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.748585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.748780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.748832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.749001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.749198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.749227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.749356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.749525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.749554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 
00:30:15.979 [2024-07-10 11:00:32.749708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.749884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.749913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.750085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.750255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.750297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.750529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.750719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.750748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.750952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.751117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.751161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.751295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.751467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.751496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.751698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.751825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.751869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.979 qpair failed and we were unable to recover it. 00:30:15.979 [2024-07-10 11:00:32.752005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.979 [2024-07-10 11:00:32.752196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.752224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 
00:30:15.980 [2024-07-10 11:00:32.752395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.752574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.752602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.752774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.752939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.752967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.753115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.753294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.753338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.753543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.753714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.753740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.753938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.754128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.754154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.754304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.754470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.754500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.754675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.754834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.754860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 
00:30:15.980 [2024-07-10 11:00:32.755111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.755329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.755364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.755516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.755672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.755707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.755884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.756054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.756083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.756277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.756480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.756510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.756649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.756883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.756917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.757093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.757243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.757285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.757475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.757629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.757658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 
00:30:15.980 [2024-07-10 11:00:32.757842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.758010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.758039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.758200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.758392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.758421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.758629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.758832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.758871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.759023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.759187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.759235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.759387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.759551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.759601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.759820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.760140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.760192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.760348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.760528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.760555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 
00:30:15.980 [2024-07-10 11:00:32.760695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.760914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.760940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.761093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.761230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.761274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:15.980 [2024-07-10 11:00:32.761500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.761676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.980 [2024-07-10 11:00:32.761707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:15.980 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.761872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.761995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.762022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.762153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.762286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.762312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.762519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.762704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.762743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.762955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.763101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.763140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 
00:30:16.254 [2024-07-10 11:00:32.763336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.763517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.763552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.763705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.763902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.763938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.764146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.764304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.764339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.764517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.764692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.764722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.764907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.765074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.765120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.765326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.765505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.765532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.765684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.765881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.765924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 
00:30:16.254 [2024-07-10 11:00:32.766104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.766280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.766308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.766510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.766704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.766748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.766901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.767118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.767161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.767318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.767448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.767475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.767646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.767822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.767867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.768025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.768199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.768226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.768355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.768493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.768521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 
00:30:16.254 [2024-07-10 11:00:32.768703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.768893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.768936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.769122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.769318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.769344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.769481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.769664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.769693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.769901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.770079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.770122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.770256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.770382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.770408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.770577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.770750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.770793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.254 qpair failed and we were unable to recover it. 00:30:16.254 [2024-07-10 11:00:32.770995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.771182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.254 [2024-07-10 11:00:32.771226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 
00:30:16.255 [2024-07-10 11:00:32.771405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.771554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.771600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.771769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.771930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.771973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.772180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.772328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.772355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.772551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.772740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.772784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.772937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.773106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.773150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.773330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.773514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.773560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.773689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.773865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.773911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 
00:30:16.255 [2024-07-10 11:00:32.774060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.774204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.774231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.774390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.774556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.774601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.774770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.774971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.775015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.775167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.775327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.775353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.775542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.775690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.775735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.775884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.776063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.776089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.776250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.776393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.776420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 
00:30:16.255 [2024-07-10 11:00:32.776606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.776801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.776845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.777027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.777219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.777245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.777392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.777591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.777620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.777802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.777971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.778015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.255 qpair failed and we were unable to recover it. 00:30:16.255 [2024-07-10 11:00:32.778148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.255 [2024-07-10 11:00:32.778277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.778317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.778472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.778661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.778708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.778855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.779048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.779092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 
00:30:16.256 [2024-07-10 11:00:32.779253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.779405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.779436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.779608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.779794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.779838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.779987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.780148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.780193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.780320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.780494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.780539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.780662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.780790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.780816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.780971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.781146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.781172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.781299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.781444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.781472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 
00:30:16.256 [2024-07-10 11:00:32.781625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.781819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.781865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.782008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.782211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.782238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.782388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.782574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.782619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.782770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.782956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.783001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.783156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.783308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.783334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.783489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.783686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.783729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.783909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.784102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.784147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 
00:30:16.256 [2024-07-10 11:00:32.784277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.784476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.256 [2024-07-10 11:00:32.784520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.256 qpair failed and we were unable to recover it. 00:30:16.256 [2024-07-10 11:00:32.784729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.784949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.784994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.785172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.785305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.785332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.785501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.785689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.785732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.785914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.786054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.786081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.786261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.786423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.786456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.786611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.786776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.786820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-07-10 11:00:32.787001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.787200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.787243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.787422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.787594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.787638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.787812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.788004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.788032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.788200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.788378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.788404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.788563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.788750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.788794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.788979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.789186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.789215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.789369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.789541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.789586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 
00:30:16.257 [2024-07-10 11:00:32.789775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.789964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.790009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.257 qpair failed and we were unable to recover it. 00:30:16.257 [2024-07-10 11:00:32.790171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.257 [2024-07-10 11:00:32.790304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.790330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.790477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.790671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.790715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.790865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.791067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.791094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.791217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.791377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.791404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.791562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.791759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.791803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.791973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.792118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.792153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-07-10 11:00:32.792283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.792486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.792534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.792691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.792855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.792899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.793042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.793220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.793247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.793406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.793600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.793649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.793836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.793999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.794043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.794176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.794302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.794330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.794493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.794682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.794727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 
00:30:16.258 [2024-07-10 11:00:32.794944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.795114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.795140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.795296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.795421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.795452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.258 qpair failed and we were unable to recover it. 00:30:16.258 [2024-07-10 11:00:32.795607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.258 [2024-07-10 11:00:32.795797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.795840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-07-10 11:00:32.796011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.796152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.796180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-07-10 11:00:32.796330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.796529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.796575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-07-10 11:00:32.796753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.796968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.796997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-07-10 11:00:32.797171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.797293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.797323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 
00:30:16.259 [2024-07-10 11:00:32.797498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.797720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.797768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-07-10 11:00:32.797940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.798103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.798139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.259 qpair failed and we were unable to recover it. 00:30:16.259 [2024-07-10 11:00:32.798261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.259 [2024-07-10 11:00:32.798439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.798466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.798621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.798811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.798855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.799022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.799180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.799207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.799363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.799538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.799568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.799750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.799942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.799987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 
00:30:16.260 [2024-07-10 11:00:32.800116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.800282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.800308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.800485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.800655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.800706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.800851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.801011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.801069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.801223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.801368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.801394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.801584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.801747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.801797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.802005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.802148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.802176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.802323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.802503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.802546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 
00:30:16.260 [2024-07-10 11:00:32.802752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.802946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.802989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.803148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.803304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.803331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.260 qpair failed and we were unable to recover it. 00:30:16.260 [2024-07-10 11:00:32.803506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.803645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.260 [2024-07-10 11:00:32.803682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.803844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.803966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.803992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.804138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.804265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.804293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.804478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.804687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.804735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.804917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.805109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.805135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 
00:30:16.261 [2024-07-10 11:00:32.805281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.805417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.805458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.805608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.805798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.805828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.806022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.806194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.806220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.806345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.806519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.806564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.806736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.806906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.806949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.807101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.807262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.807291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.807446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.807595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.807640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 
00:30:16.261 [2024-07-10 11:00:32.807849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.807991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.808017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.808174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.808328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.808359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.808538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.808703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.808732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.808920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.809063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.809090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.261 qpair failed and we were unable to recover it. 00:30:16.261 [2024-07-10 11:00:32.809215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.261 [2024-07-10 11:00:32.809339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.809367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.809533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.809707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.809752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.809924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.810097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.810123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 
00:30:16.262 [2024-07-10 11:00:32.810271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.810394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.810422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.810612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.810804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.810847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.810971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.811130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.811157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.811314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.811489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.811519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.811692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.811881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.811925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.812083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.812210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.812236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 00:30:16.262 [2024-07-10 11:00:32.812396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.812554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.262 [2024-07-10 11:00:32.812598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.262 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-07-10 11:00:32.812752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.812922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.812965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.813117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.813271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.813297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.813452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.813592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.813637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.813891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.814061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.814088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.814269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.814418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.814468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.814620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.814819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.814863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.815005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.815154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.815180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-07-10 11:00:32.815358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.815531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.815560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.815729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.815949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.815997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.816178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.816332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.816359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.816512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.816714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.816759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.816903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.817104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.817131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.817292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.817455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.817482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.817663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.817851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.817896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 
00:30:16.263 [2024-07-10 11:00:32.818019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.818174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.818200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.818333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.818461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.818490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.818639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.818825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.818855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.263 qpair failed and we were unable to recover it. 00:30:16.263 [2024-07-10 11:00:32.819000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.819125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.263 [2024-07-10 11:00:32.819152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.819292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.819450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.819478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.819633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.819808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.819852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.820007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.820168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.820194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.264 [2024-07-10 11:00:32.820347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.820547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.820592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.820773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.820938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.820985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.821134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.821292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.821319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.821497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.821685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.821714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.821938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.822105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.822132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.822260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.822416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.822449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.822619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.822815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.822860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.264 [2024-07-10 11:00:32.823049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.823215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.823242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.823376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.823525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.823555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.823747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.823961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.824006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.824138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.824276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.824303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.824477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.824640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.824683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.824857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.825004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.825032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.264 [2024-07-10 11:00:32.825154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.825276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.825304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 
00:30:16.264 [2024-07-10 11:00:32.825492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.825683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.264 [2024-07-10 11:00:32.825726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.264 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.825909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.826065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.826092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.826245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.826409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.826443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.826630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.826825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.826869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.827028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.827206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.827234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.827366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.827517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.827562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.827703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.827962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.828004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 
00:30:16.265 [2024-07-10 11:00:32.828134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.828321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.828348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.828516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.828739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.828786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.828956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.829125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.829151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.829328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.829499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.265 [2024-07-10 11:00:32.829529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.265 qpair failed and we were unable to recover it. 00:30:16.265 [2024-07-10 11:00:32.829716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.829910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.829954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.830129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.830302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.830328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.830545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.830742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.830784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 
00:30:16.266 [2024-07-10 11:00:32.830943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.831139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.831181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.831316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.831448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.831475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.831634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.831792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.831834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.832016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.832183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.832208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.832363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.832536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.832580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.832735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.832900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.832942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.833115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.833268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.833293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 
00:30:16.266 [2024-07-10 11:00:32.833414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.833591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.833635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.833850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.834009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.834055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.834212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.834366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.834392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.834557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.834749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.834793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.834974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.835170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.835196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.835344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.835519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.835564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.835768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.835938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.835981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 
00:30:16.266 [2024-07-10 11:00:32.836132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.836309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.836336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.836514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.836682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.836726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.836891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.837048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.837075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.837200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.837355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.837381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.837570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.837738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.837767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.837966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.838215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.838251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.838413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.838614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.838656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 
00:30:16.266 [2024-07-10 11:00:32.838863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.839033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.839062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.839232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.839377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.839407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.839598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.839729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.839758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.839932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.840103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.840135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.840274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.840446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.840491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.266 [2024-07-10 11:00:32.840682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.840842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.266 [2024-07-10 11:00:32.840871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.266 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.841015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.841191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.841223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-07-10 11:00:32.841365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.841608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.841636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.841795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.841968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.841994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.842161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.842336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.842361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.842504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.842650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.842689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.842859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.843007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.843033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.843218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.843364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.843388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.843562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.843737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.843775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-07-10 11:00:32.843911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.844073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.844100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.844280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.844434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.844460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.844609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.844783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.844810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.845009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.845152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.845179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.845312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.845473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.845499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.845652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.845810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.845839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.846012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.846180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.846209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-07-10 11:00:32.846418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.846585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.846612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.846791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.846942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.846984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.847112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.847250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.847278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.847484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.847620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.847646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.847862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.847987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.848014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.848173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.848310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.848340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.848521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.848655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.848688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-07-10 11:00:32.848882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.849064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.849094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.849260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.849398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.849435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.849588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.849716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.849743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.849884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.853439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.853495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.853655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.853829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.853860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.854017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.854188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.854217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.854382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.854577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.854607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 
00:30:16.267 [2024-07-10 11:00:32.854824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.855025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.855055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.855296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.855488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.855518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.855652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.855788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.855815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.855977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.856161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.856195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.856359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.856533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.856563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.856724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.856951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.856982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.267 qpair failed and we were unable to recover it. 00:30:16.267 [2024-07-10 11:00:32.857194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.857377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.267 [2024-07-10 11:00:32.857405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-07-10 11:00:32.857564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.857722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.857768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.857937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.858174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.858205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.858389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.858566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.858594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.858809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.859031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.859061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.859307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.859513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.859543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.859704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.859945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.859974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.860197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.860340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.860373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-07-10 11:00:32.860629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.860843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.860874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.861095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.861257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.861303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.861507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.861649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.861688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.861896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.863438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.863496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.863695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.863891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.863921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.864101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.864389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.864420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.864625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.864858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.864887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 
00:30:16.268 [2024-07-10 11:00:32.865095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.865319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.865350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.865549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.865725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.865755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.865970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.866191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.866226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.866447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.866640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.866667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.868438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.868666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.868696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.868923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.869128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.869156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.268 qpair failed and we were unable to recover it. 00:30:16.268 [2024-07-10 11:00:32.869383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.268 [2024-07-10 11:00:32.869586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.869617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-07-10 11:00:32.869830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.870009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.870037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.870236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.870439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.870477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.870647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.870833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.870862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.871043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.871282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.871309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.871601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.871840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.871875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.872027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.872243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.872280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.872479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.872621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.872657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-07-10 11:00:32.872919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.873102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.873144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.873409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.873567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.873604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.873849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.874079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.874129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.874448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.874658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.874720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.874914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.875186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.875241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.875441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.875679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.875729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.875962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.876168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.876205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-07-10 11:00:32.876376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.876583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.876621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.876843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.876991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.877027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.877185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.877355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.877393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.877595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.877809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.877842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.878025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.878269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.878299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.878495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.878661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.878703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.878884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.879019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.879047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.269 [2024-07-10 11:00:32.879207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.879356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.879385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.879568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.879723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.879768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.879948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.880080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.880107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.880292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.880439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.880475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.880601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.880815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.880843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.881054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.881212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.881237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 00:30:16.269 [2024-07-10 11:00:32.881412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.881578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.269 [2024-07-10 11:00:32.881604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.269 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-07-10 11:00:32.881754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.881952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.881982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.882215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.882360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.882403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.882632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.882834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.882862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.883038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.883175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.883201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.883367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.883487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.883513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.883691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.883895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.883943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.884177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.884324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.884351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-07-10 11:00:32.884536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.884655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.884680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.884892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.885025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.885051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.885239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.885403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.885439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.885627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.885855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.885881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.886065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.886241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.886270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.886481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.886661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.886699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.886881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.887057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.887083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-07-10 11:00:32.887258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.887455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.887494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.887698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.887851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.887877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.888035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.888232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.888259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.888414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.888581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.888606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.888782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.888980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.889009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.889186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.889355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.889386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.889571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.889691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.889717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-07-10 11:00:32.889904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.890063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.890090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.890318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.890479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.890505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.890637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.890824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.890850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.891045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.891203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.891232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.891365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.891543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.891571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.891704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.891853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.891880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.892060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.892205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.892248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 
00:30:16.270 [2024-07-10 11:00:32.892430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.892614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.892640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.892812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.892989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.893015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.893216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.893357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.893402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.893611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.893780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.893809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.270 [2024-07-10 11:00:32.893961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.894146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.270 [2024-07-10 11:00:32.894176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.270 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.894405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.894595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.894622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.894778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.894913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.894941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-07-10 11:00:32.895098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.895250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.895276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.895498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.895692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.895721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.895929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.896122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.896167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.896354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.896509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.896560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.896756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.896989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.897019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.897216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.897397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.897423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.897601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.897792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.897835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-07-10 11:00:32.898014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.898184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.898212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.898366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.898608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.898636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.898806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.898953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.899000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.899183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.899375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.899402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.899585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.899754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.899797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.899997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.900146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.900174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.900333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.900537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.900584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-07-10 11:00:32.900736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.900928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.900972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.901141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.901261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.901287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.901446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.901645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.901688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.901863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.902036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.902069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.902251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.902379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.902406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.902612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.902800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.902842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.903028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.903227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.903255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-07-10 11:00:32.903436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.903589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.903617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.903777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.903950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.903978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.904163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.904313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.904345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.904528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.904661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.904690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.904848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.905024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.905051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.905231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.905384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.905410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.905597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.905722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.905750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 
00:30:16.271 [2024-07-10 11:00:32.905909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.906143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.906169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.271 qpair failed and we were unable to recover it. 00:30:16.271 [2024-07-10 11:00:32.906350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.271 [2024-07-10 11:00:32.906532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.906559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.906724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.906883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.906910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.907069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.907249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.907276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.907512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.907685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.907712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.907946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.908123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.908150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.908285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.908466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.908501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 
00:30:16.272 [2024-07-10 11:00:32.908699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.908824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.908850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.909000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.909157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.909184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.909409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.909598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.909625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.909789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.909974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.910000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.910132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.910351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.910377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.910542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.910674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.910701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.910850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.911004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.911032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 
00:30:16.272 [2024-07-10 11:00:32.911244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.911364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.911390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.911589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.911769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.911809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.911954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.912157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.912184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.912410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.912602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.912629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.912765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.912915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.912944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.913148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.913309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.913336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.913484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.913638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.913665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 
00:30:16.272 [2024-07-10 11:00:32.913919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.914143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.914169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.914328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.914489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.914515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.914683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.914845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.272 [2024-07-10 11:00:32.914871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.272 qpair failed and we were unable to recover it. 00:30:16.272 [2024-07-10 11:00:32.915054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.915210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.915237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.915455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.915586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.915613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.915807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.915962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.915990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.916192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.916330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.916355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 
00:30:16.273 [2024-07-10 11:00:32.916551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.916704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.916731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.916904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.917135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.917160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.917393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.917597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.917624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.917804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.918050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.918091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.918253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.918442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.918484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.918651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.918898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.918924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.919142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.919337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.919364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 
00:30:16.273 [2024-07-10 11:00:32.919550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.919798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.919824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.920058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.920242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.920281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.920475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.920710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.920751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.920913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.921071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.921112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.921278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.921506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.921534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.921832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.922078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.922104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.922261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.922418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.922451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 
00:30:16.273 [2024-07-10 11:00:32.922582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.922819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.922860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.923023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.923155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.923182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.923393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.923577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.923604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.923768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.923925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.923966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.924154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.924318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.924358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.924552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.924696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.924722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.924892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.925045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.925071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 
00:30:16.273 [2024-07-10 11:00:32.925203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.925354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.925380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.925523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.925712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.925738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.925896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.926065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.926091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.926280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.926516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.926543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.926689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.926918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.926945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.927166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.927357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.927383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.927586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.927733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.927759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 
00:30:16.273 [2024-07-10 11:00:32.927943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.928104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.928146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.928412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.928612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.928640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.928820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.928979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.929023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.929179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.929382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.929409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.929581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.929730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.929771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.929943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.930112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.930138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 00:30:16.273 [2024-07-10 11:00:32.930311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.930499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.273 [2024-07-10 11:00:32.930526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.273 qpair failed and we were unable to recover it. 
00:30:16.273 [2024-07-10 11:00:32.930685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-07-10 11:00:32.930864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.273 [2024-07-10 11:00:32.930890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.273 qpair failed and we were unable to recover it.
[... the same four-line sequence (connect() failed, errno = 111 twice; sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt, from 2024-07-10 11:00:32.931016 through 2024-07-10 11:00:32.992794 ...]
00:30:16.278 [2024-07-10 11:00:32.992953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.993151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.993174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.993327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.993479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.993504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.993684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.993842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.993881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.994072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.994218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.994243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.994400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.994539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.994564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.994800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.994980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.995006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.995162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.995313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.995338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 
00:30:16.278 [2024-07-10 11:00:32.995493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.995648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.995672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.995830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.995985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.996140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.996454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.996801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.996981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.997140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.997258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.997282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.997447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.997602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.997627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 
00:30:16.278 [2024-07-10 11:00:32.997764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.997997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.998151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.998465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.998801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.998950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.999098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.999253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.999277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.999431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.999557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.999581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:32.999732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.999856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:32.999881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 
00:30:16.278 [2024-07-10 11:00:33.000005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.000163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.000188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:33.000347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.000496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.000533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:33.000668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.000794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.000828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:33.000985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.001166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.001193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:33.001319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.001479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.001506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:33.001642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.001775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.001803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 00:30:16.278 [2024-07-10 11:00:33.001968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.002120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.278 [2024-07-10 11:00:33.002146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.278 qpair failed and we were unable to recover it. 
00:30:16.278 [2024-07-10 11:00:33.002334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.002505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.002542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.280 qpair failed and we were unable to recover it. 00:30:16.280 [2024-07-10 11:00:33.002682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.002842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.002870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.280 qpair failed and we were unable to recover it. 00:30:16.280 [2024-07-10 11:00:33.003005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.003169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.003196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.280 qpair failed and we were unable to recover it. 00:30:16.280 [2024-07-10 11:00:33.003353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.003507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.003535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.280 qpair failed and we were unable to recover it. 00:30:16.280 [2024-07-10 11:00:33.003667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.003823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.003850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.280 qpair failed and we were unable to recover it. 00:30:16.280 [2024-07-10 11:00:33.003988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.004136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.280 [2024-07-10 11:00:33.004162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.280 qpair failed and we were unable to recover it. 00:30:16.280 [2024-07-10 11:00:33.004311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.004443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.004470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 
00:30:16.281 [2024-07-10 11:00:33.004603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.004731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.004758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.004912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.005053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.005079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.005241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.005403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.005445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.005584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.005766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.005793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.005947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.006101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.006127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.006274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.006439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.006467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.006600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.006754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.006780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 
00:30:16.281 [2024-07-10 11:00:33.006946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.007111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.007138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.007375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.007542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.007569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.007703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.007866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.007892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.008075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.008197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.008223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.008384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.008531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.008558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.008688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.008858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.008884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.009045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.009175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.009201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 
00:30:16.281 [2024-07-10 11:00:33.009365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.009504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.009532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.009708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.009870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.009896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.010018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.010152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.010178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.010333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.010486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.010513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.010641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.010759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.010785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.010923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.011076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.011102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.011285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.011443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.011472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 
00:30:16.281 [2024-07-10 11:00:33.011607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.011732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.011760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.011941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.012094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.012125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.281 qpair failed and we were unable to recover it. 00:30:16.281 [2024-07-10 11:00:33.012306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.012437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.281 [2024-07-10 11:00:33.012463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.282 qpair failed and we were unable to recover it. 00:30:16.282 [2024-07-10 11:00:33.012627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.282 [2024-07-10 11:00:33.012783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.282 [2024-07-10 11:00:33.012809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.282 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.012970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.013126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.013157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.013287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.013449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.013487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.013640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.013800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.013828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [2024-07-10 11:00:33.014011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.014165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.014191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.014320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.014470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.014506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.014635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.014793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.014819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.014974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.015133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.015161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.015288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.015525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.015552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.015709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.015844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.015871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.015998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.016143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.016170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [2024-07-10 11:00:33.016303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.016447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.016489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.016614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.016745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.016773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.016905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.017055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.017081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.017258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.017379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.017406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.017596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.017721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.017747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.017929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.018084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.018112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.018271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.018430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.018458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [2024-07-10 11:00:33.018615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.018769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.018806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.018961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.019140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.019167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.019286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.019412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.019444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.019683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.019822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.019853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.020018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.020199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.020226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.020358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.020513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.020540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.020677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.020845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.020872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [2024-07-10 11:00:33.021014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.021172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.021198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.021330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.021491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.021518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.021648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.021807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.021833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.022019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.022141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.022168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.022325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.022506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.022533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.022662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.022828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.022854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.023018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.023144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.023174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [2024-07-10 11:00:33.023311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.023462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.023495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.023628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.023766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.023793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.023951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.024103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.024129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.024310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.024439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.024466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.024599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.024764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.024791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.024943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.025102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.025128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 00:30:16.283 [2024-07-10 11:00:33.025259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.025406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.283 [2024-07-10 11:00:33.025438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [2024-07-10 11:00:33.025556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:16.283 [2024-07-10 11:00:33.025794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:16.283 [2024-07-10 11:00:33.025820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 
00:30:16.283 qpair failed and we were unable to recover it. 
00:30:16.283 [... the same error sequence (two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" messages, one "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously, with only the per-message timestamps advancing from [2024-07-10 11:00:33.025556] through [2024-07-10 11:00:33.078340], across timeline marks 00:30:16.283 through 00:30:16.568 ...]
00:30:16.568 [2024-07-10 11:00:33.078496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.078627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.078654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.078808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.078963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.078990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.079170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.079314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.079340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.079500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.079657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.079683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.079862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.080233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.080530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 
00:30:16.568 [2024-07-10 11:00:33.080844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.080986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.081013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.081164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.081340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.081367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.568 [2024-07-10 11:00:33.081527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.081683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.568 [2024-07-10 11:00:33.081717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.568 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.081932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.082090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.082116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.082273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.082399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.082431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.082588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.082748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.082775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.082930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.083105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.083131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 
00:30:16.569 [2024-07-10 11:00:33.083289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.083468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.083500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.083683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.083831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.083869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.084030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.084161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.084186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.084339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.084492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.084519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.084675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.084806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.084833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.084990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.085146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.085173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.085368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.085501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.085530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 
00:30:16.569 [2024-07-10 11:00:33.085702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.085860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.085886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.086045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.086236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.086261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.086415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.086573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.086600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.086782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.086938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.086964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.087146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.087293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.087319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.087472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.087600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.087627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.087782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.087903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.087929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 
00:30:16.569 [2024-07-10 11:00:33.088055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.088178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.088205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.088359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.088547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.088573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.088738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.088892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.088921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.089107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.089240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.089265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.089395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.089562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.089589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.569 qpair failed and we were unable to recover it. 00:30:16.569 [2024-07-10 11:00:33.089773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.569 [2024-07-10 11:00:33.089946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.089972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.090128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.090252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.090277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-07-10 11:00:33.090437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.090623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.090649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.090805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.090958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.090984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.091162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.091314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.091340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.091494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.091650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.091677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.091856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.092228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.092528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-07-10 11:00:33.092839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.092994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.093150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.093295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.093320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.093503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.093666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.093693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.093871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.094050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.094075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.094209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.094391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.094416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.094561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.094743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.094771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.094927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.095109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.095135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-07-10 11:00:33.095317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.095473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.095507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.095641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.095800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.095827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.095983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.096164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.096190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.096370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.096562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.096590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.096722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.096917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.096943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.097126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.097311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.097338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.097475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.097631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.097658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 
00:30:16.570 [2024-07-10 11:00:33.097815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.097990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.098016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.098169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.098323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.098351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.098504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.098637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.098663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.570 qpair failed and we were unable to recover it. 00:30:16.570 [2024-07-10 11:00:33.098788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.570 [2024-07-10 11:00:33.098967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.098993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.099145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.099278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.099305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.099494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.099730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.099757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.100033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.100218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.100244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-07-10 11:00:33.100499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.100679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.100706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.100879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.101115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.101141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.101304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.101482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.101508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.101669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.101823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.101848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.102016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.102201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.102229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.102435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.102619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.102644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.102834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.103084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.103126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-07-10 11:00:33.103327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.103488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.103515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.103696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.103843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.103870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.104020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.104184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.104227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.104416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.104592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.104619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.104769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.104929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.104956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.105184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.105343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.105386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.105585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.105775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.105801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 
00:30:16.571 [2024-07-10 11:00:33.106016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.106172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.106199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.106437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.106616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.106643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.106792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.106946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.106975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.107216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.107387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.107417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.571 qpair failed and we were unable to recover it. 00:30:16.571 [2024-07-10 11:00:33.107584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.107740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.571 [2024-07-10 11:00:33.107767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.107934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.108163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.108190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.108402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.108565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.108592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 
00:30:16.572 [2024-07-10 11:00:33.108755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.108941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.108981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.109170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.109359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.109385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.109579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.109704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.109731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.109910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.110064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.110091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.110225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.110378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.110405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.110598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.110779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.110806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.110985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.111107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.111133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 
00:30:16.572 [2024-07-10 11:00:33.111287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.111450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.111477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.111635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.111783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.111810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.111964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.112124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.112152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.112361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.112554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.112582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.112741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.112881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.112908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.113059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.113244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.113271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.113419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.113555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.113582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 
00:30:16.572 [2024-07-10 11:00:33.113767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.113922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.113964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.114126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.114253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.114279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.114434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.114578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.114605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.114761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.114942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.114969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.115099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.115259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.115286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.115417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.115669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.115696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 00:30:16.572 [2024-07-10 11:00:33.115880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.116011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.572 [2024-07-10 11:00:33.116037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.572 qpair failed and we were unable to recover it. 
00:30:16.572 [2024-07-10 11:00:33.116166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.116400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.116432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.116609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.116737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.116764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.116948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.117109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.117136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.117300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.117468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.117496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.117651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.117807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.117834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.118012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.118170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.118197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.118353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.118507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.118534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 
00:30:16.573 [2024-07-10 11:00:33.118728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.118906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.118933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.119085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.119264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.119290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.119451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.119571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.119597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.119749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.119928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.119955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.120117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.120312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.120339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.120524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.120707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.120734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.120869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.121031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.121057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 
00:30:16.573 [2024-07-10 11:00:33.121205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.121360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.121387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.121550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.121706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.121734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.121915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.122097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.122124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.122281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.122408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.122439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.122626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.122794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.122822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.122956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.123130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.123157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.123308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.123441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.123468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 
00:30:16.573 [2024-07-10 11:00:33.123595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.123725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.123752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.123908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.124055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.124082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.124263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.124386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.124413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.124568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.124770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.124797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.124980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.125126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.125156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.573 [2024-07-10 11:00:33.125313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.125442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.573 [2024-07-10 11:00:33.125469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.573 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.125647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.125831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.125857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 
00:30:16.574 [2024-07-10 11:00:33.126009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.126191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.126217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.126372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.126489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.126516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.126694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.126849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.126876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.127056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.127212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.127241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.127369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.127519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.127547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.127702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.127850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.127877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.128057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.128237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.128263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 
00:30:16.574 [2024-07-10 11:00:33.128422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.128570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.128601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.128728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.128883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.128910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.129087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.129217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.129244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.129377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.129523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.129551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.129737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.129867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.129894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.130044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.130199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.130225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.130342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.130499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.130526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 
00:30:16.574 [2024-07-10 11:00:33.130683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.130868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.130895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.131072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.131226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.131253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.131408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.131579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.131608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.131773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.131924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.131955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.132089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.132244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.132271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.132431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.132586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.132614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.132792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.132940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.132967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 
00:30:16.574 [2024-07-10 11:00:33.133088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.133237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.133265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.133398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.133565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.133592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.133716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.133899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.133927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.134107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.134230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.134257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.574 qpair failed and we were unable to recover it. 00:30:16.574 [2024-07-10 11:00:33.134418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.134587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.574 [2024-07-10 11:00:33.134615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.134799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.134950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.134977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.135104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.135256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.135283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 
00:30:16.575 [2024-07-10 11:00:33.135469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.135616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.135643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.135773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.135927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.135954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.136080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.136207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.136234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.136362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.136547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.136574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.136719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.136873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.136899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.137048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.137177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.137203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.137360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.137516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.137544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 
00:30:16.575 [2024-07-10 11:00:33.137705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.137859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.137886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.138038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.138159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.138185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.138338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.138467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.138494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.138657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.138806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.138833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.139008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.139151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.139177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.139329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.139486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.139513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.139692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.139814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.139841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 
00:30:16.575 [2024-07-10 11:00:33.139996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.140121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.140147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.140304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.140505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.140532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.140686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.140863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.140889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.141015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.141166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.141193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.141344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.141505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.141534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.141676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.141864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.141890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.142075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.142260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.142287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 
00:30:16.575 [2024-07-10 11:00:33.142468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.142630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.142657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.142838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.142991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.143019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.143173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.143330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.143357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.575 qpair failed and we were unable to recover it. 00:30:16.575 [2024-07-10 11:00:33.143538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.575 [2024-07-10 11:00:33.143694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.143720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.143877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.144034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.144060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.144182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.144365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.144392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.144555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.144688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.144716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 
00:30:16.576 [2024-07-10 11:00:33.144870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.145025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.145052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.145213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.145360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.145386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.145527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.145722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.145749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.145876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.146054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.146080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.146234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.146435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.146462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.146623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.146748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.146775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.146931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.147107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.147134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 
00:30:16.576 [2024-07-10 11:00:33.147275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.147434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.147461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.147619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.147804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.147831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.148012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.148169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.148196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.148380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.148537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.148564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.148757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.148949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.148976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.149110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.149238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.149263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.149414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.149585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.149614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 
00:30:16.576 [2024-07-10 11:00:33.149767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.149947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.149974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.150130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.150308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.150336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.150519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.150663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.150690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.150827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.150981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.151008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.151169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.151350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.151377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.576 qpair failed and we were unable to recover it. 00:30:16.576 [2024-07-10 11:00:33.151537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.151715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.576 [2024-07-10 11:00:33.151743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.151930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.152086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.152113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 
00:30:16.577 [2024-07-10 11:00:33.152246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.152395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.152421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.152614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.152794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.152821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.152943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.153124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.153151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.153332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.153479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.153506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.153657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.153836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.153862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.154016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.154139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.154167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.154326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.154480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.154507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 
00:30:16.577 [2024-07-10 11:00:33.154685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.154870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.154896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.155021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.155173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.155199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.155345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.155499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.155526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.155706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.155890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.155917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.156078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.156209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.156236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.156419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.156573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.156599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 00:30:16.577 [2024-07-10 11:00:33.156726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.156913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.577 [2024-07-10 11:00:33.156940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.577 qpair failed and we were unable to recover it. 
00:30:16.577 [2024-07-10 11:00:33.157089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.577 [2024-07-10 11:00:33.157243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.577 [2024-07-10 11:00:33.157269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420
00:30:16.577 qpair failed and we were unable to recover it.
00:30:16.577 [... the same three-line failure (connect() errno = 111 at posix.c:1032, then nvme_tcp.c:2289 sock connection error for tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 11:00:33.157089 through 11:00:33.209987 ...]
00:30:16.583 [2024-07-10 11:00:33.209780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.583 [2024-07-10 11:00:33.209960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.583 [2024-07-10 11:00:33.209987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420
00:30:16.583 qpair failed and we were unable to recover it.
00:30:16.583 [2024-07-10 11:00:33.210122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.210245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.210277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.210473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.210620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.210647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.210828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.211013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.211039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.211181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.211334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.211360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.211518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.211671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.211697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.211856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.212196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 
00:30:16.583 [2024-07-10 11:00:33.212509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.212838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.212988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.213118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.213269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.213296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.213481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.213615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.213644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.213774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.213929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.213957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.214109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.214269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.214296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.583 qpair failed and we were unable to recover it. 00:30:16.583 [2024-07-10 11:00:33.214459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.583 [2024-07-10 11:00:33.214645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.214671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2e98000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 
00:30:16.584 [2024-07-10 11:00:33.214847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.214993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.215028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.215180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.215344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.215371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.215510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.215656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.215683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.215862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.216153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.216495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.216797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.216981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 
00:30:16.584 [2024-07-10 11:00:33.217110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.217263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.217290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.217444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.217594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.217620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.217800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.217953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.217980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.218110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.218255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.218283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.218417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.218657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.218685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.218812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.218995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.219022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.219177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.219346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.219373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 
00:30:16.584 [2024-07-10 11:00:33.219530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.219711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.219739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.219928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.220066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.220119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.220342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.220553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.220581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.220742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.220893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.220920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.221066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.221224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.221264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.221441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.221606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.221634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.221807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.221990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.222026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 
00:30:16.584 [2024-07-10 11:00:33.222219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.222391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.222418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.222557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.222706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.222746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.222909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.223098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.223124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.223284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.223431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.223457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.223580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.223736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.223764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.584 qpair failed and we were unable to recover it. 00:30:16.584 [2024-07-10 11:00:33.223984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.584 [2024-07-10 11:00:33.224116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.224142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.224320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.224465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.224493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 
00:30:16.585 [2024-07-10 11:00:33.224620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.224802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.224829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.224997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.225162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.225189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.225406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.225597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.225624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.225763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.225960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.225986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.226149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.226276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.226304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.226472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.226649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.226674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.226894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.227055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.227081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 
00:30:16.585 [2024-07-10 11:00:33.227237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.227381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.227408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.227573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.227694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.227720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.227934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.228061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.228087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.228250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.228408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.228439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.228591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.228717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.228744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.228923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.229081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.229108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.229292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.229548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.229576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 
00:30:16.585 [2024-07-10 11:00:33.229728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.229881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.229907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.230068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.230224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.230250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.230384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.230529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.230558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.230805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.230985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.231011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.231162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.231317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.231344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.231551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.231709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.231735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.231861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.232003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.232028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 
00:30:16.585 [2024-07-10 11:00:33.232175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.232363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.232389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.232552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.232708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.232754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.232921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.233076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.233102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.585 [2024-07-10 11:00:33.233236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.233391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.585 [2024-07-10 11:00:33.233417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.585 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.233569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.233695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.233720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.233897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.234093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.234134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.234325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.234528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.234555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 
00:30:16.586 [2024-07-10 11:00:33.234715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.234942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.234968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.235132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.235284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.235315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.235487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.235668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.235695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.235856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.236012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.236039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.236224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.236393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.236428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.236583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.236752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.236779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.236938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.237125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.237164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 
00:30:16.586 [2024-07-10 11:00:33.237344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.237475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.237502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.237630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.237782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.237823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.238026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.238224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.238251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.238411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.238645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.238671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.238824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.239055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.239081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.239205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.239396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.239443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.239641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.239848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.239873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 
00:30:16.586 [2024-07-10 11:00:33.240010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.240216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.240246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.240434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.240592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.240618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.240769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.240893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.240918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.241071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.241245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.241272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.241420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.241565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.241591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.241720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.241917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.586 [2024-07-10 11:00:33.241958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.586 qpair failed and we were unable to recover it. 00:30:16.586 [2024-07-10 11:00:33.242098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.242282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.242322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 
00:30:16.587 [2024-07-10 11:00:33.242488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.242644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.242670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.242830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.243027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.243070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.243255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.243477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.243519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.243677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.243829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.243876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.244061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.244189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.244216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.244406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.244564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.244591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.244747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.244881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.244907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 
00:30:16.587 [2024-07-10 11:00:33.245087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.245239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.245264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.245450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.245623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.245649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.245806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.245982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.246008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.246217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.246402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.246435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.246617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.246773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.246813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.247037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.247221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.247248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.247376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.247534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.247560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 
00:30:16.587 [2024-07-10 11:00:33.247724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.247889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.247916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.248068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.248195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.248220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.248340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.248507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.248533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.248673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.248820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.248847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.249058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.249215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.249242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.249381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.249540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.249566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.249752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.249919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.249946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 
00:30:16.587 [2024-07-10 11:00:33.250136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.250291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.250317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.250498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.250673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.250699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.250923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.251079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.251104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.251316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.251518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.251545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.587 [2024-07-10 11:00:33.251702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.251827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.587 [2024-07-10 11:00:33.251863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.587 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.252001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.252149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.252175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.252345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.252520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.252547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 
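The block above repeats a single failure signature from the host side of the disconnect test: posix_sock_create() in SPDK's posix socket module reports connect() failing with errno = 111, nvme_tcp_qpair_connect_sock() then logs the socket error for tqpair=0x7f2ea0000b90 against 10.0.0.2:4420, and the qpair is given up on ("qpair failed and we were unable to recover it."). errno 111 is ECONNREFUSED on Linux: the target address is reachable, but nothing is listening on port 4420 while the target application is down. The following standalone Python sketch (not SPDK code; the address and port are simply taken from the log lines above) reproduces the errno a probe would see in that state:

    # Minimal sketch, not SPDK code: why the log repeats "connect() failed, errno = 111".
    # errno 111 is ECONNREFUSED on Linux -- the host is reachable, but no listener
    # is bound to the NVMe/TCP port while the target application is down.
    import errno
    import socket

    TARGET = ("10.0.0.2", 4420)          # address/port taken from the log above

    def probe_once(timeout_s: float = 1.0) -> int:
        """Try one TCP connect; return 0 on success or the errno on failure."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout_s)
            return sock.connect_ex(TARGET)   # connect_ex returns the errno instead of raising

    if __name__ == "__main__":
        rc = probe_once()
        if rc == 0:
            print("listener is back, connect() succeeded")
        else:
            print(f"connect() failed, errno = {rc} ({errno.errorcode.get(rc, 'unknown')})")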
00:30:16.588 [2024-07-10 11:00:33.252706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.252873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.252898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 [2024-07-10 11:00:33.253061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.253236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.253264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 [2024-07-10 11:00:33.253390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.253619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.253646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3587218 Killed "${NVMF_APP[@]}" "$@"
00:30:16.588 [2024-07-10 11:00:33.253836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.254022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:30:16.588 [2024-07-10 11:00:33.254049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 11:00:33 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:16.588 [2024-07-10 11:00:33.254224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:16.588 [2024-07-10 11:00:33.254454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.254489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 11:00:33 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:16.588 [2024-07-10 11:00:33.254655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- common/autotest_common.sh@10 -- # set +x
00:30:16.588 [2024-07-10 11:00:33.254843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.254870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
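The interesting record buried in the lines above is the shell message from target_disconnect.sh line 44: the previously running target process (PID 3587218, started as "${NVMF_APP[@]}") was killed (bash's "Killed" report indicates SIGKILL), and the trace then enters disconnect_init 10.0.0.2 and nvmfappstart -m 0xF0 to bring a fresh target up. That is why ECONNREFUSED records flood the log on either side of it: the host keeps retrying while no target is listening. As a hedged illustration (not the test harness's code), the sketch below shows how a SIGKILL'ed child produces exactly that kind of "Killed" report, with the signal visible in the wait status:

    # Illustrative sketch only, not part of the SPDK test harness: a child killed
    # with SIGKILL is reported by bash as "Killed", and its wait status encodes the
    # signal, which Python's subprocess API exposes as a negative returncode (-9).
    import signal
    import subprocess
    import time

    child = subprocess.Popen(["sleep", "60"])   # stand-in for "${NVMF_APP[@]}"
    time.sleep(0.1)
    child.send_signal(signal.SIGKILL)           # force the "target disappears" condition
    child.wait()
    print(f"child pid {child.pid} exited with returncode {child.returncode}")
    assert child.returncode == -signal.SIGKILL  # -9: terminated by SIGKILL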
00:30:16.588 [2024-07-10 11:00:33.255016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.255173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.255199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.255356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.255511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.255538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.255714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.255836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.255861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.256052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.256211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.256252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.256442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.256596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.256623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.256799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.256938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.256964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 00:30:16.588 [2024-07-10 11:00:33.257118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.257248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.588 [2024-07-10 11:00:33.257275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.588 qpair failed and we were unable to recover it. 
00:30:16.588 [2024-07-10 11:00:33.257460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.257614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.257641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 [2024-07-10 11:00:33.257792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.258020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.258046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 [2024-07-10 11:00:33.258214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.258378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.258405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 [2024-07-10 11:00:33.258563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.258744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.258771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 [2024-07-10 11:00:33.258954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- nvmf/common.sh@469 -- # nvmfpid=3587821
00:30:16.588 [2024-07-10 11:00:33.259108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:16.588 [2024-07-10 11:00:33.259135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 11:00:33 -- nvmf/common.sh@470 -- # waitforlisten 3587821
00:30:16.588 [2024-07-10 11:00:33.259263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- common/autotest_common.sh@819 -- # '[' -z 3587821 ']'
00:30:16.588 [2024-07-10 11:00:33.259444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.259481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
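The replacement target is started here as nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk network namespace, and its PID (nvmfpid=3587821) is handed to waitforlisten. In SPDK applications -m is the reactor core mask, so 0xF0 pins the target's reactors to cores 4-7; reading -i as the instance/shared-memory ID and -e as a tracepoint/log mask is an assumption on my part, not something this log states. A tiny sketch of the core-mask arithmetic:

    # Decode a hex core mask such as the "-m 0xF0" passed to nvmf_tgt above.
    # Each set bit selects one CPU core for an SPDK reactor: 0xF0 -> cores 4-7.
    def cores_from_mask(mask: int) -> list[int]:
        return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

    print(cores_from_mask(0xF0))        # [4, 5, 6, 7] -> four reactors on cores 4-7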
00:30:16.588 11:00:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:16.588 [2024-07-10 11:00:33.259634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:16.588 [2024-07-10 11:00:33.259779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.259806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.588 qpair failed and we were unable to recover it.
00:30:16.588 11:00:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:16.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:16.588 11:00:33 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:16.588 [2024-07-10 11:00:33.260000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 11:00:33 -- common/autotest_common.sh@10 -- # set +x
00:30:16.588 [2024-07-10 11:00:33.260169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.588 [2024-07-10 11:00:33.260196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.589 qpair failed and we were unable to recover it.
00:30:16.589 [2024-07-10 11:00:33.260321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.260473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.260501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.589 qpair failed and we were unable to recover it.
00:30:16.589 [2024-07-10 11:00:33.260663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.260841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.260867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.589 qpair failed and we were unable to recover it.
00:30:16.589 [2024-07-10 11:00:33.261001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.261180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.261207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.589 qpair failed and we were unable to recover it.
00:30:16.589 [2024-07-10 11:00:33.261348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.261485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.589 [2024-07-10 11:00:33.261512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.589 qpair failed and we were unable to recover it.
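waitforlisten 3587821 then blocks until the new target is actually usable: with rpc_addr=/var/tmp/spdk.sock and max_retries=100 it polls until the process accepts connections on its RPC UNIX domain socket (hence the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message). Below is a standalone sketch of that kind of wait loop, assuming the socket path and retry count from the trace above; it is an illustration, not the waitforlisten implementation from autotest_common.sh:

    # Illustrative wait-for-listen loop (not the autotest_common.sh implementation):
    # poll the RPC UNIX socket until the freshly started target accepts a
    # connection, or give up after a bounded number of retries.
    import socket
    import time

    RPC_SOCK = "/var/tmp/spdk.sock"     # rpc_addr from the trace above
    MAX_RETRIES = 100                   # max_retries from the trace above

    def wait_for_listen(path: str = RPC_SOCK, retries: int = MAX_RETRIES, delay_s: float = 0.5) -> bool:
        for _ in range(retries):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                    sock.connect(path)       # succeeds once the app is listening
                    return True
            except (FileNotFoundError, ConnectionRefusedError):
                time.sleep(delay_s)          # socket not created yet, or not accepting
        return False

    if __name__ == "__main__":
        ok = wait_for_listen()
        print("RPC socket is up" if ok else "gave up waiting for the RPC socket")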
00:30:16.589 [2024-07-10 11:00:33.261664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.261800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.261827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.261984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.262111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.262138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.262292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.262416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.262448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.262588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.262715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.262743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.262907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.263051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.263078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.263233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.263373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.263399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.263563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.263719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.263746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 
00:30:16.589 [2024-07-10 11:00:33.263901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.264050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.264075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.264216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.264375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.264401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.264569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.264697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.264723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.264870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.265222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.265573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.265875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.265996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 
00:30:16.589 [2024-07-10 11:00:33.266171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.266472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.266787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.266950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.267133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.267263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.267289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.267441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.267598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.267625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.267802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.267931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.267956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.589 [2024-07-10 11:00:33.268086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.268233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.268259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 
00:30:16.589 [2024-07-10 11:00:33.268404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.268543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.589 [2024-07-10 11:00:33.268568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.589 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.268755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.268935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.268961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.269092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.269265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.269290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.269444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.269582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.269608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.269762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.269877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.269903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.270084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.270270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.270295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.270434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.270560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.270586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 
00:30:16.590 [2024-07-10 11:00:33.270755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.270883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.270911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.271065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.271220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.271245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.271377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.271547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.271572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.271727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.271903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.271928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.272103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.272255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.272283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.272419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.272592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.272619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.272809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.272964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.272990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 
00:30:16.590 [2024-07-10 11:00:33.273146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.273298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.273324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.273452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.273581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.273608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.273766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.273897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.273923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.274081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.274235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.274263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.274410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.274569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.274596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.274731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.274862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.274888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.275032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.275204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.275230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 
00:30:16.590 [2024-07-10 11:00:33.275388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.275541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.275570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.275726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.275904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.275931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.276063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.276204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.276229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.276387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.276575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.276603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.276740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.276895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.276921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.277048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.277175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.277200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.590 qpair failed and we were unable to recover it. 00:30:16.590 [2024-07-10 11:00:33.277388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.277517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.590 [2024-07-10 11:00:33.277543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 
00:30:16.591 [2024-07-10 11:00:33.277676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.277800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.277825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.277976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.278162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.278188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.278311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.278462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.278489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.278645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.278768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.278794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.278923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.279072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.279097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.279226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.279381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.279408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.279567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.279721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.279749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 
00:30:16.591 [2024-07-10 11:00:33.279902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.280060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.280086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.280268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.280395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.280422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.280560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.280710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.280737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.280872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.281206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.281489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.281807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.281970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 
00:30:16.591 [2024-07-10 11:00:33.282119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.282275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.282302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.282465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.282597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.282623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.282780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.282913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.282939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.283108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.283259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.283285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.283410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.283597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.283623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.283762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.283945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.283971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.284117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.284294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.284321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 
00:30:16.591 [2024-07-10 11:00:33.284453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.284584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.284610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.591 qpair failed and we were unable to recover it. 00:30:16.591 [2024-07-10 11:00:33.284761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.591 [2024-07-10 11:00:33.284894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.284919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.285055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.285180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.285206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.285333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.285517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.285544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.285691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.285811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.285837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.285991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.286169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.286195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.286325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.286458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.286484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 
00:30:16.592 [2024-07-10 11:00:33.286615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.286782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.286808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.286956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.287120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.287146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.287302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.287435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.287461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.287619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.287744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.287770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.287956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.288085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.288111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.288260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.288443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.288470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.288626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.288757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.288782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 
00:30:16.592 [2024-07-10 11:00:33.288943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.289091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.289116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.289275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.289433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.289460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.289654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.289788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.289821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.289977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.290136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.290161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.290315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.290486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.290514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.290673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.290831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.290856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.291012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.291190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.291216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 
00:30:16.592 [2024-07-10 11:00:33.291371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.291537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.291564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.291692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.291826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.291851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.292003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.292128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.292156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.292309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.292456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.292483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.292613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.292774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.292799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.292930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.293073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.293099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 00:30:16.592 [2024-07-10 11:00:33.293236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.293417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.592 [2024-07-10 11:00:33.293448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.592 qpair failed and we were unable to recover it. 
00:30:16.592 [2024-07-10 11:00:33.293573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.293705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.293737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.293890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.294013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.294043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.294232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.294351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.294376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.294560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.294686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.294724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.294859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.295170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.295499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 
00:30:16.593 [2024-07-10 11:00:33.295866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.295991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.296018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.296197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.296344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.296370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.296560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.296716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.296742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.296869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.297022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.297053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.297212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.297384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.297423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.297588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.297771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.297796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.297926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.298078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.298104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 
00:30:16.593 [2024-07-10 11:00:33.298257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.298410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.298451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.298635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.298758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.298783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.298940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.299097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.299123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.299267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.299389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.299416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.299578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.299734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.299762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.299919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.300101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.300126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.300276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.300445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.300475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 
00:30:16.593 [2024-07-10 11:00:33.300631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.300755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.300780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.300907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.301057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.301082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.301238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.301399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.301423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.301586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.301716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.301743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.301919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.302048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.593 [2024-07-10 11:00:33.302073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.593 qpair failed and we were unable to recover it. 00:30:16.593 [2024-07-10 11:00:33.302210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:30:16.593 [2024-07-10 11:00:33.302233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.302279] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.594 [2024-07-10 11:00:33.302413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.302454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 
00:30:16.594 [2024-07-10 11:00:33.302584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.302728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.302753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.302893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.303074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.303100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.303257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.303380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.303405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.303577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.303728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.303754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.303917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.304070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.304097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.304267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.304416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.304447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.304610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.304758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.304784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 
00:30:16.594 [2024-07-10 11:00:33.304905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.305263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.305558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.305856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.305990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.306016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.306146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.306325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.306351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.306483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.306628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.306654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.306817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.306992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.307018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 
00:30:16.594 [2024-07-10 11:00:33.307179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.307358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.307384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.594 qpair failed and we were unable to recover it. 00:30:16.594 [2024-07-10 11:00:33.307544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.307724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.594 [2024-07-10 11:00:33.307751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.307904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.308058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.308084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.308267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.308395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.308420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.308619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.308741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.308767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.308896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.309223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 
00:30:16.595 [2024-07-10 11:00:33.309532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.309843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.309997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.310136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.310290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.310316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.310496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.310679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.310704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.310858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.311186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 00:30:16.595 [2024-07-10 11:00:33.311460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.595 qpair failed and we were unable to recover it. 
00:30:16.595 [2024-07-10 11:00:33.311766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.595 [2024-07-10 11:00:33.311908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.311934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.312087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.312217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.312243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.312381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.312514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.312540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.312725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.312868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.312896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.313052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.313185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.313212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.313370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.313525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.313552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.313679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.313805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.313832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 
00:30:16.596 [2024-07-10 11:00:33.313957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.314138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.314164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.314313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.314473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.314500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.314629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.314757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.314787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.314969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.315142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.315168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.315294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.315413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.315444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.596 [2024-07-10 11:00:33.315602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.315762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.596 [2024-07-10 11:00:33.315788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.596 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.315915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.316057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.316084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 
00:30:16.597 [2024-07-10 11:00:33.316263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.316414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.316446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.316637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.316792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.316817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.316949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.317102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.317129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.317309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.317436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.317462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.317591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.317720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.317748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.317879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.318206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 
00:30:16.597 [2024-07-10 11:00:33.318514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.318848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.318995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.319126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.319302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.319327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.319475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.319630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.319656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.597 [2024-07-10 11:00:33.319803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.319935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.597 [2024-07-10 11:00:33.319962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.597 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.320095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.320241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.320267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.320449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.320579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.320605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 
00:30:16.598 [2024-07-10 11:00:33.320759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.320887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.320914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.321044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.321201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.321227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.321396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.321539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.321565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.321722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.321855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.321880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.322036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.322165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.322191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.322348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.322535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.322562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.322725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.322853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.322879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 
00:30:16.598 [2024-07-10 11:00:33.323008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.323169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.323196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.323349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.323527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.323554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.598 qpair failed and we were unable to recover it. 00:30:16.598 [2024-07-10 11:00:33.323679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.323832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.598 [2024-07-10 11:00:33.323858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.324013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.324173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.324198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.324354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.324508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.324535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.324661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.324848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.324874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.325047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.325195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.325219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 
00:30:16.599 [2024-07-10 11:00:33.325350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.325490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.325516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.325676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.325813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.325840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.325996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.326120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.326146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.326299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.326481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.326508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.326641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.326768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.326793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.326925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.327059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.599 [2024-07-10 11:00:33.327085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.599 qpair failed and we were unable to recover it. 00:30:16.599 [2024-07-10 11:00:33.327212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.327354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.327381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 
00:30:16.600 [2024-07-10 11:00:33.327523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.327681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.327708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.327861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.328219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.328527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.328837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.328986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.329012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.329200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.329347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.329372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.329527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.329710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.329736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 
00:30:16.600 [2024-07-10 11:00:33.329867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.330207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.330520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.330841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.330992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.331018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.331198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.331350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.331376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.331540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.331716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.331742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.331903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.332059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.332086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 
00:30:16.600 [2024-07-10 11:00:33.332242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.332396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.332421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.600 qpair failed and we were unable to recover it. 00:30:16.600 [2024-07-10 11:00:33.332588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.600 [2024-07-10 11:00:33.332713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.332739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.332898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.333082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.333109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.333287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.333436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.333462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.333595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.333750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.333787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.333937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.334078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.334114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.334269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.334394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.334420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 
00:30:16.601 [2024-07-10 11:00:33.334580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.334743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.334778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.334924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.335046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.335072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.335257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.335449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.335477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.335608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.335761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.335788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.335926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.336065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.336091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.336240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.336362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.336388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 00:30:16.601 [2024-07-10 11:00:33.336536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.336667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.601 [2024-07-10 11:00:33.336693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.601 qpair failed and we were unable to recover it. 
00:30:16.601 [2024-07-10 11:00:33.336821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 EAL: No free 2048 kB hugepages reported on node 1
00:30:16.601 [2024-07-10 11:00:33.337002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 [2024-07-10 11:00:33.337028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.601 qpair failed and we were unable to recover it.
00:30:16.601 [2024-07-10 11:00:33.337163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 [2024-07-10 11:00:33.337314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 [2024-07-10 11:00:33.337340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.601 qpair failed and we were unable to recover it.
00:30:16.601 [2024-07-10 11:00:33.337501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 [2024-07-10 11:00:33.337654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 [2024-07-10 11:00:33.337679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.601 qpair failed and we were unable to recover it.
00:30:16.601 [2024-07-10 11:00:33.337819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.601 [2024-07-10 11:00:33.337996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.602 qpair failed and we were unable to recover it.
00:30:16.602 [2024-07-10 11:00:33.338148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.602 qpair failed and we were unable to recover it.
00:30:16.602 [2024-07-10 11:00:33.338518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.602 qpair failed and we were unable to recover it.
00:30:16.602 [2024-07-10 11:00:33.338804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.602 [2024-07-10 11:00:33.338961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420
00:30:16.602 qpair failed and we were unable to recover it.
00:30:16.602 [2024-07-10 11:00:33.339121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.339273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.339300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.339454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.339608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.339633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.339793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.339972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.339997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.340153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.340305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.340330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.340486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.340622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.340648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.340775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.340924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.340949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.341104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.341230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.341255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 
00:30:16.602 [2024-07-10 11:00:33.341381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.341517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.341543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.341667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.341792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.341819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.341972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.342151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.342177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.342332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.342517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.342544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.342674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.342820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.342846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.602 qpair failed and we were unable to recover it. 00:30:16.602 [2024-07-10 11:00:33.343004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.343123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.602 [2024-07-10 11:00:33.343149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.343304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.343481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.343507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 
00:30:16.603 [2024-07-10 11:00:33.343633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.343777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.343801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.343986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.344155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.344180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.344309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.344440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.344467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.344600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.344751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.344777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.344903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.345201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.345541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 
00:30:16.603 [2024-07-10 11:00:33.345857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.345995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.346188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.346503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.346799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.346956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.347110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.347233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.347260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.347392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.347581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.347607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.347779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.347936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.347963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 
00:30:16.603 [2024-07-10 11:00:33.348119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.348298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.348324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.348494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.348627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.348653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.348817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.348996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.603 [2024-07-10 11:00:33.349022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.603 qpair failed and we were unable to recover it. 00:30:16.603 [2024-07-10 11:00:33.349174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.349326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.349356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.349489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.349643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.349668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.349802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.349957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.349983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.350117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.350262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.350289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 
00:30:16.604 [2024-07-10 11:00:33.350481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.350609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.350636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.350789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.350940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.350966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.351096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.351273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.351300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.351479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.351609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.351636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.351767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.351954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.351980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.352136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.352277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.352302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.352435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.352592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.352621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 
00:30:16.604 [2024-07-10 11:00:33.352746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.352876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.352903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.353083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.353242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.353268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.353452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.353607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.353632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.353770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.353947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.353972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.354123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.354273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.354299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.354455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.354605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.354630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.604 qpair failed and we were unable to recover it. 00:30:16.604 [2024-07-10 11:00:33.354768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.604 [2024-07-10 11:00:33.354915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.354940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 
00:30:16.605 [2024-07-10 11:00:33.355099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.355276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.355302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.355454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.355628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.355654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.355811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.355962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.355994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.356126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.356254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.356281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.356412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.356599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.356626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.356792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.356971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.356997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.357152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.357277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.357304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 
00:30:16.605 [2024-07-10 11:00:33.357477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.357633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.357659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.357851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.357997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.358023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.358162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.358295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.358321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.358511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.358681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.358718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.358873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.359188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.359477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 
00:30:16.605 [2024-07-10 11:00:33.359826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.359996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.605 qpair failed and we were unable to recover it. 00:30:16.605 [2024-07-10 11:00:33.360150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.360296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.605 [2024-07-10 11:00:33.360322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.360454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.360628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.360654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.360787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.360917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.360943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.361071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.361195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.361221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.361375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.361504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.361530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.361654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.361787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.361813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 
00:30:16.606 [2024-07-10 11:00:33.361964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.362091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.362117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.362244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.362397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.362429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.362559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.362727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.362752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.362938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.363210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.363514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.363819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.363968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 
00:30:16.606 [2024-07-10 11:00:33.364122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.364298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.364324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.364474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.364632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.364658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.364805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.364977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.365003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.606 qpair failed and we were unable to recover it. 00:30:16.606 [2024-07-10 11:00:33.365136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.365290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.606 [2024-07-10 11:00:33.365317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.365460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.365618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.365644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.365796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.365949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.365976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.366163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.366289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.366315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 
00:30:16.607 [2024-07-10 11:00:33.366471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.366616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.366643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.366800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.366924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.366950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.367152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.367308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.367334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.367532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.367663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.367688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.367901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.368071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.368096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.368393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.368560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.368584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 00:30:16.607 [2024-07-10 11:00:33.368741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.368896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.607 [2024-07-10 11:00:33.368923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.607 qpair failed and we were unable to recover it. 
00:30:16.608 [2024-07-10 11:00:33.369075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.369242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.369269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.369436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.369587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.369612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.369746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.369897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.369923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.370070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.370201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.370229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.370384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.370550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.370576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.370698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.370874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.370899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.371020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.371177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.371203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 
00:30:16.608 [2024-07-10 11:00:33.371356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.371508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.371535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.371668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.371818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.371844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.371966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.372116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.372141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.372287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.372406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.372445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.372630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.372774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.372805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.372988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.373133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.373159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.373316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.373469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.373496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 
00:30:16.608 [2024-07-10 11:00:33.373631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.373779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.608 [2024-07-10 11:00:33.373806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.608 qpair failed and we were unable to recover it. 00:30:16.608 [2024-07-10 11:00:33.373936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.374113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.374139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.374294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.374461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.374488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.374713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.374841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.374868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.374998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.375160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.375186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.375346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.375481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.375507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.375666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.375811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.375836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 
00:30:16.888 [2024-07-10 11:00:33.375966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.376082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.376107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.376259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.376395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.376420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.376563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.376712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.376720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.888 [2024-07-10 11:00:33.376738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.376863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.377209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.377552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 00:30:16.888 [2024-07-10 11:00:33.377864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.377985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.888 [2024-07-10 11:00:33.378011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.888 qpair failed and we were unable to recover it. 
00:30:16.888 [2024-07-10 11:00:33.378219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.378368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.378393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.378525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.378706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.378731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.378882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.379035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.379062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.379223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.379377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.379403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea0000b90 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.379621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.379796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.379826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.379985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.380121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.380147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.380301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.380448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.380475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 
00:30:16.889 [2024-07-10 11:00:33.380611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.380753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.380779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.380936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.381117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.381143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.381298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.381462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.381487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.381629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.381786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.381811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.381939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.382242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.382583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 
00:30:16.889 [2024-07-10 11:00:33.382848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.382994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.383145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.383305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.383330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.383478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.383628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.383654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.383811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.383983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.384008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.384157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.384305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.384330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.384504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.384657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.384683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.384830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.384978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.385004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 
00:30:16.889 [2024-07-10 11:00:33.385166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.385314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.385339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.385464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.385616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.385642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.889 qpair failed and we were unable to recover it. 00:30:16.889 [2024-07-10 11:00:33.385792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.889 [2024-07-10 11:00:33.385932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.385957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.386118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.386270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.386295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.386414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.386553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.386579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.386746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.386914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.386939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.387065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.387230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.387256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 
00:30:16.890 [2024-07-10 11:00:33.387407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.387571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.387598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.387717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.387871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.387897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.388079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.388246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.388272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.388400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.388574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.388600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.388761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.388915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.388941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.389119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.389277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.389302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.389444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.389579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.389606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 
00:30:16.890 [2024-07-10 11:00:33.389768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.389915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.389941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.390091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.390214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.390242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.390410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.390591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.390618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.390740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.390883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.390908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.391063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.391197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.391222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.391378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.391527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.391554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.391685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.391827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.391853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 
00:30:16.890 [2024-07-10 11:00:33.392034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.392188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.392214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.392398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.392557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.392588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.392715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.392864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.392890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.393026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.393181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.393207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.393394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.393554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.393581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.393735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.393850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.393875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.394057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.394206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.394232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 
00:30:16.890 [2024-07-10 11:00:33.394390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.394522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.394550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.890 qpair failed and we were unable to recover it. 00:30:16.890 [2024-07-10 11:00:33.394710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.890 [2024-07-10 11:00:33.394861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.394886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.395045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.395227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.395264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.395449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.395601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.395627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.395806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.395956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.395982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.396144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.396296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.396322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.396446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.396566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.396592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 
00:30:16.891 [2024-07-10 11:00:33.396743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.396863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.396890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.397048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.397173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.397198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.397358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.397505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.397532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.397697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.397877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.397914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.398071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.398252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.398277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.398422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.398566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.398592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.398721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.398871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.398896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 
00:30:16.891 [2024-07-10 11:00:33.399053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.399180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.399205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.399336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.399488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.399516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.399697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.399868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.399895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.400051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.400178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.400204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.400384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.400537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.400564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.400716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.400874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.400899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.401055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.401230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.401255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 
00:30:16.891 [2024-07-10 11:00:33.401423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.401554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.401580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.401733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.401912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.401938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.402066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.402242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.402267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.402418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.402578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.402604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.402763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.402883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.402909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.403074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.403230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.403256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 00:30:16.891 [2024-07-10 11:00:33.403382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.403557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.891 [2024-07-10 11:00:33.403584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.891 qpair failed and we were unable to recover it. 
00:30:16.891 [2024-07-10 11:00:33.403723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.403872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.403898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.404052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.404202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.404230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.404379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.404560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.404587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.404742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.404899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.404927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.405087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.405263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.405289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.405443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.405591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.405617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.405755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.405877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.405903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 
00:30:16.892 [2024-07-10 11:00:33.406061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.406216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.406243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.406412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.406551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.406577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.406731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.406856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.406883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.407069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.407249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.407275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.407435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.407610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.407637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.407790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.407917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.407943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.408093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.408244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.408269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 
00:30:16.892 [2024-07-10 11:00:33.408423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.408584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.408610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.408799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.408923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.408948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.409077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.409227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.409252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.409402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.409589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.409624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.409780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.409925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.409951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.410075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.410226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.410252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.410407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.410574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.410601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 
00:30:16.892 [2024-07-10 11:00:33.410750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.410872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.410898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.411047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.411169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.411196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.411326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.411446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.411473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.892 qpair failed and we were unable to recover it. 00:30:16.892 [2024-07-10 11:00:33.411600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.892 [2024-07-10 11:00:33.411780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.411806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.411945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.412069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.412096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.412278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.412408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.412452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.412577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.412729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.412755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 
00:30:16.893 [2024-07-10 11:00:33.412915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.413038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.413064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.413204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.413388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.413430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.413587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.413736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.413762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.413885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.414039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.414066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.414215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.414408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.414442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.414615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.414953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.414983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.415189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.415344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.415370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 
00:30:16.893 [2024-07-10 11:00:33.415523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.415655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.415681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.415873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.416027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.416065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.416213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.416393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.416436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.416632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.416776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.416802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.416932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.417110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.417136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.417324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.417481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.417507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.417659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.417819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.417844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 
00:30:16.893 [2024-07-10 11:00:33.417997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.418167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.418193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.418327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.418457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.418484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.418613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.418789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.418815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.418964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.419082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.419110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.893 qpair failed and we were unable to recover it. 00:30:16.893 [2024-07-10 11:00:33.419269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.893 [2024-07-10 11:00:33.419396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.419439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.419562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.419707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.419741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.419899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.420047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.420073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 
00:30:16.894 [2024-07-10 11:00:33.420227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.420350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.420377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.420541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.420716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.420753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.420906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.421081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.421107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.421235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.421365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.421392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.421554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.421705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.421742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.421924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.422056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.422082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.422223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.422353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.422378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 
00:30:16.894 [2024-07-10 11:00:33.422552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.422727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.422753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.422904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.423059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.423085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.423222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.423372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.423398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.423565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.423719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.423745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.423898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.424051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.424077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.424231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.424409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.424452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.424603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.424751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.424777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 
00:30:16.894 [2024-07-10 11:00:33.424933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.425085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.425111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.425267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.425416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.425449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.425599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.425728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.425753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.425910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.426091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.426117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.426271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.426417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.426448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.426630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.426794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.426824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.426981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.427139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.427165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 
00:30:16.894 [2024-07-10 11:00:33.427282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.427437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.427464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.427610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.427728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.427754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.427930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.428073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.428099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.894 qpair failed and we were unable to recover it. 00:30:16.894 [2024-07-10 11:00:33.428247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.894 [2024-07-10 11:00:33.428367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.428393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.428591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.428742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.428769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.428900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.429051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.429077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.429229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.429389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.429431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-10 11:00:33.429625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.429751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.429777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.429933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.430064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.430090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.430219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.430443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.430471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.430613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.430761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.430788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.430942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.431118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.431144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.431298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.431453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.431480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.431629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.431813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.431838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-10 11:00:33.431997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.432155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.432181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.432341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.432508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.432534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.432663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.432855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.432880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.433009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.433138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.433164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.433283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.433403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.433445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.433599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.433758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.433784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.433946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.434076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.434102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-10 11:00:33.434272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.434450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.434476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.434640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.434796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.434833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.434994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.435155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.435182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.435347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.435518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.435545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.435730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.435861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.435887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.436078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.436225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.436257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.436438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.436607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.436636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-10 11:00:33.436803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.436939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.436965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.437167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.437327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.437353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.895 qpair failed and we were unable to recover it. 00:30:16.895 [2024-07-10 11:00:33.437492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-10 11:00:33.437639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.437665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.437822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.438000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.438026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.438191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.438345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.438370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.438536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.438714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.438742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.438921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.439073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.439099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-10 11:00:33.439281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.439458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.439485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.439642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.439794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.439819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.439944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.440076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.440101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.440234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.440388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.440430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.440567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.440731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.440757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.440909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.441063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.441090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.441212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.441397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.441439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-10 11:00:33.441612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.441771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.441797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.441975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.442168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.442194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.442344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.442510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.442538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.442695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.442857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.442883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.443062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.443214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.443239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.443362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.443506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.443533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.443665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.443792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.443818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-10 11:00:33.443967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.444122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.444152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.444308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.444482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.444509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.444674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.444844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.444869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.445025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.445168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.445193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.445319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.445463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.445489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.445641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.445801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.445827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.445981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.446105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.446130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-10 11:00:33.446269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.446382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.446408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.896 qpair failed and we were unable to recover it. 00:30:16.896 [2024-07-10 11:00:33.446567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-10 11:00:33.446734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.446759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.446909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.447221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.447560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.447868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.447997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.448022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.448155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.448280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.448305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-10 11:00:33.448474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.448653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.448679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.448867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.449013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.449039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.449189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.449368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.449393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.449552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.449711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.449738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.449879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.450011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.450037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.450197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.450352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.450379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.450540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.450720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.450746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-10 11:00:33.450900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.451057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.451083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.451260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.451434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.451459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.451603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.451751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.451778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.451933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.452052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.452078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.452207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.452365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.452392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.452530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.452714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.452741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.452872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.453022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.453048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-10 11:00:33.453208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.453332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.453358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.453552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.453713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.453739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.453921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.454108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.454134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.454297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.454416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.454448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.454631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.454755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.454782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.454948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.455066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.455092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 00:30:16.897 [2024-07-10 11:00:33.455255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.455372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.455397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-10 11:00:33.455561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-10 11:00:33.455690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.455726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.455909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.456039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.456067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.456196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.456380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.456407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.456572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.456713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.456738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.456912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.457244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.457525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 
00:30:16.898 [2024-07-10 11:00:33.457840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.457993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.458143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.458449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.458780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.458957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.459103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.459247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.459273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.459442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.459596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.459624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.459783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.459916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.459941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 
00:30:16.898 [2024-07-10 11:00:33.460068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.460216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.460243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.460394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.460570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.460598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.460727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.460885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.460910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.461068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.461220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.461246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.461399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.461540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.461567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.461749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.461927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.461953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.462114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.462296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.462322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 
00:30:16.898 [2024-07-10 11:00:33.462507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.462665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.462692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.462834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.462988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.463014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.898 qpair failed and we were unable to recover it. 00:30:16.898 [2024-07-10 11:00:33.463187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.463314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.898 [2024-07-10 11:00:33.463339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.463524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.463667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.463693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.463834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.463991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.464016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.464143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.464290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.464320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.464498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.464675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.464701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 
00:30:16.899 [2024-07-10 11:00:33.464854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.464984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.465009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.465191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.465346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.465371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.465524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.465651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.465678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.465842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.465973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.466134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.466495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.466788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.466965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 
00:30:16.899 [2024-07-10 11:00:33.467146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.467264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.467290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.467443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.467604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.467630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.467804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.467952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.467978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.468128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.468277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.468303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.468457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.468639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.468665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.468829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.468957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.468983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.469144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.469267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.469293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 
00:30:16.899 [2024-07-10 11:00:33.469419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.469580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.469606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.469735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.469888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.469914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.470073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.470223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.470249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.470408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.470572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.470599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.470787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.470935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.470961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.471088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.471233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.471259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.471384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.471606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.471633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 
00:30:16.899 [2024-07-10 11:00:33.471795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.471974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.472000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.899 qpair failed and we were unable to recover it. 00:30:16.899 [2024-07-10 11:00:33.472158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.899 [2024-07-10 11:00:33.472281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.472308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.472500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.472660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.472688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.472853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.472972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.472999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.473158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.473284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.473310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.473441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.473590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.473616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.473744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.473861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.473870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:16.900 [2024-07-10 11:00:33.473887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.473995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:16.900 [2024-07-10 11:00:33.474015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.900 [2024-07-10 11:00:33.474029] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.900 [2024-07-10 11:00:33.474043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.474106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:16.900 [2024-07-10 11:00:33.474167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.474193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.474127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:16.900 [2024-07-10 11:00:33.474196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:16.900 [2024-07-10 11:00:33.474199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:16.900 [2024-07-10 11:00:33.474352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.474484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.474510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.474628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.474790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.474815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.474980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.475164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.475190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.475320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.475509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.475535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.475696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.475828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.475854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 
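The app_setup_trace notices above describe how the tracepoint data for this failing run can be pulled for later inspection. A minimal sketch of those two capture paths follows; only the `spdk_trace -s nvmf -i 0` invocation and the /dev/shm/nvmf_trace.0 path come from the notices themselves, while the assumption that the spdk_trace binary from this build is on PATH and the destination path under /tmp are illustrative, not taken from this log.

# snapshot live events from the nvmf app attached to shm instance 0 (command as printed in the notice)
spdk_trace -s nvmf -i 0
# or keep the raw trace file for offline analysis/debug; destination path is illustrative
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved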
00:30:16.900 [2024-07-10 11:00:33.476004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.476128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.476154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.476292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.476448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.476476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.476628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.476844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.476870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.477024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.477194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.477220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.477348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.477499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.477525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.477671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.477835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.477860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.477989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.478113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.478139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 
00:30:16.900 [2024-07-10 11:00:33.478264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.478399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.478438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.478581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.478701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.478734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.478891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.479030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.479056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.479201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.479319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.479345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.900 qpair failed and we were unable to recover it. 00:30:16.900 [2024-07-10 11:00:33.479507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.900 [2024-07-10 11:00:33.479655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.479681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.479870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.479987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.480140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 
00:30:16.901 [2024-07-10 11:00:33.480422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.480716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.480949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.481082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.481215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.481241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.481360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.481551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.481579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.481714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.481836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.481873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.481996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.482154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.482179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.482306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.482435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.482461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 
00:30:16.901 [2024-07-10 11:00:33.482586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.482715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.482743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.482865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.483049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.483075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.483198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.483353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.483390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.483574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.483727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.483753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.483877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.484178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.484484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 
00:30:16.901 [2024-07-10 11:00:33.484797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.484945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.485067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.485180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.485206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.485339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.485463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.485493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.485624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.485769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.485802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.485931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.486077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.486102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.486269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.486413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.486444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.486610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.486761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.486787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 
00:30:16.901 [2024-07-10 11:00:33.486925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.487241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.487543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.901 qpair failed and we were unable to recover it. 00:30:16.901 [2024-07-10 11:00:33.487829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.901 [2024-07-10 11:00:33.487978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.488123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.488446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.488768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.488973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 
00:30:16.902 [2024-07-10 11:00:33.489116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.489271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.489297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.489457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.489608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.489634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.489787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.489919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.489946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.490103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.490255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.490280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.490400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.490575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.490602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.490752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.490888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.490913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.491032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.491184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.491210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 
00:30:16.902 [2024-07-10 11:00:33.491328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.491466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.491493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.491644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.491801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.491827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.491989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.492144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.492170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.492326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.492446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.492473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.492625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.492743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.492768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.492931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.493065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.493091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 00:30:16.902 [2024-07-10 11:00:33.493217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.493337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.902 [2024-07-10 11:00:33.493364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.902 qpair failed and we were unable to recover it. 
00:30:16.902 [2024-07-10 11:00:33.493510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.902 [2024-07-10 11:00:33.493634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.902 [2024-07-10 11:00:33.493659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:16.902 qpair failed and we were unable to recover it.
[... the same three-line failure group - two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420" error, followed by "qpair failed and we were unable to recover it." - repeats continuously from 2024-07-10 11:00:33.493849 through 2024-07-10 11:00:33.542018 ...]
00:30:16.909 [2024-07-10 11:00:33.542177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.909 [2024-07-10 11:00:33.542328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.909 [2024-07-10 11:00:33.542355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:16.909 qpair failed and we were unable to recover it.
00:30:16.909 [2024-07-10 11:00:33.542471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.542603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.542628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.542764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.542917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.542944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.543072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.543206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.543232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.543391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.543538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.543564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.543686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.543861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.543887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.544006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.544164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.544190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.544313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.544464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.544491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 
00:30:16.909 [2024-07-10 11:00:33.544647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.544765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.544791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.544908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.545050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.545076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.545283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.545464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.545490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.545619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.545752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.545778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.545935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.546057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.546083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.546223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.546351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.546377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.546513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.546674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.546700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 
00:30:16.909 [2024-07-10 11:00:33.546861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.547025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.547052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.547215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.547393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.547419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.547571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.547719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.547745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.547936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.548085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.548111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.548232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.548406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.548438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.548566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.548704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.548737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.548865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.549002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.549029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 
00:30:16.909 [2024-07-10 11:00:33.549188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.549346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.549374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.549567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.549714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.549742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.549899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.550023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.550049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.550183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.550335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.550361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.909 qpair failed and we were unable to recover it. 00:30:16.909 [2024-07-10 11:00:33.550511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.909 [2024-07-10 11:00:33.550656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.550682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.550837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.550989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.551180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 
00:30:16.910 [2024-07-10 11:00:33.551524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.551841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.551984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.552114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.552240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.552266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.552422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.552628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.552654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.552779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.552905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.552931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.553064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.553182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.553208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.553368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.553498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.553525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 
00:30:16.910 [2024-07-10 11:00:33.553646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.553759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.553785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.553923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.554044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.554072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.554200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.554348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.554374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.554511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.554690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.554716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.554865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.555168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.555489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 
00:30:16.910 [2024-07-10 11:00:33.555796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.555973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.556102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.556255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.556281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.556397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.556524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.556550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.556672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.556805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.556831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.556951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.557132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.557158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.557283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.557408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.557441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.557587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.557726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.557752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 
00:30:16.910 [2024-07-10 11:00:33.557903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.558181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.558526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.910 qpair failed and we were unable to recover it. 00:30:16.910 [2024-07-10 11:00:33.558848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.910 [2024-07-10 11:00:33.558994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.559140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.559404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.559713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.559889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 
00:30:16.911 [2024-07-10 11:00:33.560047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.560178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.560205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.560329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.560507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.560534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.560662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.560808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.560834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.560992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.561148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.561174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.561304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.561432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.561459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.561579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.561696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.561722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.561884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 
00:30:16.911 [2024-07-10 11:00:33.562168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.562472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.562736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.562911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.563037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.563167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.563193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.563370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.563503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.563530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.563662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.563816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.563843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.564007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.564128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.564154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 
00:30:16.911 [2024-07-10 11:00:33.564315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.564448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.564475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.564628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.564750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.564777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.564940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.565073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.565099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.565239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.565357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.565383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.565627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.565756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.565782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.566000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.566138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.566164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.566297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.566449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.566476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 
00:30:16.911 [2024-07-10 11:00:33.566629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.566762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.566790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.566922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.567041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.567071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.911 qpair failed and we were unable to recover it. 00:30:16.911 [2024-07-10 11:00:33.567229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.911 [2024-07-10 11:00:33.567347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.567373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.567510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.567629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.567656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.567784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.567942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.567968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.568160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.568290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.568316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.568457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.568606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.568632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 
00:30:16.912 [2024-07-10 11:00:33.568791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.568972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.568999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.569127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.569290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.569316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.569447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.569569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.569596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.569741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.569886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.569912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.570066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.570223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.570253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.570408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.570570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.570596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.570725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.570848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.570874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 
00:30:16.912 [2024-07-10 11:00:33.571012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.571169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.571196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.571321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.571458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.571485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.571627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.571744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.571770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.571897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.572197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.572501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 00:30:16.912 [2024-07-10 11:00:33.572795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.912 [2024-07-10 11:00:33.572974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.912 qpair failed and we were unable to recover it. 
00:30:16.912 [2024-07-10 11:00:33.573096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.912 [2024-07-10 11:00:33.573216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.912 [2024-07-10 11:00:33.573243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:16.912 qpair failed and we were unable to recover it.
[... the same four-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock connection error for tqpair=0x23ae350 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it") repeats continuously with only the timestamps advancing, from 11:00:33.573096 through 11:00:33.619998 (elapsed-time prefix 00:30:16.912 to 00:30:16.918) ...]
00:30:16.918 [2024-07-10 11:00:33.620129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.918 [2024-07-10 11:00:33.620265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.918 [2024-07-10 11:00:33.620291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.918 qpair failed and we were unable to recover it. 00:30:16.918 [2024-07-10 11:00:33.620440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.918 [2024-07-10 11:00:33.620561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.620589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.620718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.620838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.620865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.621003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.621138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.621164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.621328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.621479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.621506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.621625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.621782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.621808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.621961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.622232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.622544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.622821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.622967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.623124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.623270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.623296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.623450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.623587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.623613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.623744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.623873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.623899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.624035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.624154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.624180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.624328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.624451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.624477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.624608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.624792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.624818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.624967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.625086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.625112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.625245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.625391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.625417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.625546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.625702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.625727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.625852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.626166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.626467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.626779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.626963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.627086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.627209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.627237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.627362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.627517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.627544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.627674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.627791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.627817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.627940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.628068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.628094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.628226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.628347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.628373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.628510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.628673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.628699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.628825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.629179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.629474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.629787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.629932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.630080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.630208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.630233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.630352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.630477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.630505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.630685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.630833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.630859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.630977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.631096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.631122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.631244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.631407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.631439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.631558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.631703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.631730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.631883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.632156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.632464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.632767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.632920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.633049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.633163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.633188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.633366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.633497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.633524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.633644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.633771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.633797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.633938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.634089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.634115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.634265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.634395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.634432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.634577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.634700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.634727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 
00:30:16.919 [2024-07-10 11:00:33.634888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.635016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.635042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.635198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.635328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.635359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.635508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.635624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.919 [2024-07-10 11:00:33.635650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.919 qpair failed and we were unable to recover it. 00:30:16.919 [2024-07-10 11:00:33.635782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.635942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.635968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.636099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.636235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.636262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.636380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.636512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.636557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.636717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.636868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.636894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.637059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.637207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.637233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.637363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.637483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.637510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.637672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.637792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.637818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.637956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.638109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.638135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.638288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.638414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.638445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.638601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.638751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.638777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.638902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.639233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.639527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.639801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.639957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.640087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.640266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.640292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.640420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.640570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.640597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.640718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.640871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.640897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.641018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.641139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.641164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.641313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.641467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.641495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.641625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.641744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.641771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.641901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.642206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.642534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.642847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.642997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.643119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.643278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.643303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.643466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.643588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.643614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.643734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.643852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.643878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.644003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.644133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.644161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.644286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.644436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.644463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.644596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.644745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.644771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.644889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.645242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.645529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.645842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.645990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.646181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.646512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.646789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.646962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.647082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.647198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.647224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.647348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.647469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.647496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.647612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.647731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.647757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.647901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.648033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.648059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.648220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.648339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.648365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.648529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.648646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.648672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.648852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.649190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.649473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 
00:30:16.920 [2024-07-10 11:00:33.649779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.649936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.650115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.650256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.650282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.650443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.650582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.650608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.650764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.650895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.650925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.651093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.651242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.651268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.920 [2024-07-10 11:00:33.651401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.651560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.920 [2024-07-10 11:00:33.651587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.920 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.651720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.651873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.651899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 
00:30:16.921 [2024-07-10 11:00:33.652065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.652193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.652219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.652400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.652561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.652588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.652743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.652868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.652895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.653044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.653193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.653220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.653364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.653489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.653517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.653660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.653826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.653852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.653969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 
00:30:16.921 [2024-07-10 11:00:33.654249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.654533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.654865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.654982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.655154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.655441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.655740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.655915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.656038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.656190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.656216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 
00:30:16.921 [2024-07-10 11:00:33.656358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.656525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.656552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.656677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.656809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.656835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.656966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.657105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.657131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.657288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.657449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.657476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.657593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.657729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.657754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.657911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.658212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 
00:30:16.921 [2024-07-10 11:00:33.658504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.658779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.658933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.659059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.659176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.659202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.659333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.659454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.659481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.659610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.659727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.659753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.659932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.660082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.660108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.660266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.660399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.660432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 
00:30:16.921 [2024-07-10 11:00:33.660551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.660706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.660732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.660864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.661194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.661523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.661834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.661990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.662116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.662262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.662289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.662414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.662548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.662574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 
00:30:16.921 [2024-07-10 11:00:33.662735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.662910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.662936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.921 qpair failed and we were unable to recover it. 00:30:16.921 [2024-07-10 11:00:33.663059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.921 [2024-07-10 11:00:33.663203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.663229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.663383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.663537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.663563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.663702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.663827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.663853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.664015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.664143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.664169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.664323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.664445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.664472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.664604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.664723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.664748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 
00:30:16.922 [2024-07-10 11:00:33.664913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.665187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.665471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.665787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.665973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.666104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.666228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.666256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.666391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.666518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.666549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.666708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.666834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.666861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 
00:30:16.922 [2024-07-10 11:00:33.666987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.667136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.667162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.667286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.667414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.667446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.667584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.667708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.667734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.667859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.668198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.668519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.668802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.668957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 
00:30:16.922 [2024-07-10 11:00:33.669080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.669245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.669271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.669391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.669551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.669579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.669740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.669890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.669916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.670066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.670183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.670210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.670366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.670516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.670544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.670664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.670782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.670809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 00:30:16.922 [2024-07-10 11:00:33.670952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.671070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.922 [2024-07-10 11:00:33.671097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.922 qpair failed and we were unable to recover it. 
00:30:16.923 [2024-07-10 11:00:33.671226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.671380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.671407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.671541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.671655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.671682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.671833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.671990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.672134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.672431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.672741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.672893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.673021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.673158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.673185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 
00:30:16.923 [2024-07-10 11:00:33.673301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.673448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.673475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.673600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.673734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.673761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.673907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.674037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.674063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.674192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.674318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.674345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.674502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.674686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.674712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.674846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.675161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 
00:30:16.923 [2024-07-10 11:00:33.675505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.675816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.675989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.676134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.676417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.676746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.676921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.677079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.677218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.677244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.677365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.677510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.677537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 
00:30:16.923 [2024-07-10 11:00:33.677659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.677779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.677807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.677924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.678052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.678078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.678227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.678363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.678390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.678521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.678671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.923 [2024-07-10 11:00:33.678697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.923 qpair failed and we were unable to recover it. 00:30:16.923 [2024-07-10 11:00:33.678863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.679207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.679557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 
00:30:16.924 [2024-07-10 11:00:33.679850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.679982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.680155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.680479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.680808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.680973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.681128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.681257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.681283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.681441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.681574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.681600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.681748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.681900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.681926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 
00:30:16.924 [2024-07-10 11:00:33.682047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.682170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.682212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.682337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.682489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.682516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.682646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.682795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.682821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.682954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.683082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.683108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.683238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.683370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.683397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.683536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.683719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.683745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.683881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 
00:30:16.924 [2024-07-10 11:00:33.684224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.684506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.684806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.684962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.685082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.685230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.685260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.685413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.685552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.685578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.685704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.685855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.685882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.686003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.686125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.686151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 
00:30:16.924 [2024-07-10 11:00:33.686301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.686446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.686474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.686597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.686744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.686771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.686896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.687185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.687480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.924 qpair failed and we were unable to recover it. 00:30:16.924 [2024-07-10 11:00:33.687787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.924 [2024-07-10 11:00:33.687900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.687926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.688087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.688235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.688261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 
00:30:16.925 [2024-07-10 11:00:33.688386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.688514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.688541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.688669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.688823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.688849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.689000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.689271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.689550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.689831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.689970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.690176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 
00:30:16.925 [2024-07-10 11:00:33.690496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.690830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.690991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.691130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.691299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.691326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.691479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.691636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.691663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.691788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.691933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.691972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.692101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.692255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.692291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.692438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.692597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.692625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 
00:30:16.925 [2024-07-10 11:00:33.692768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.692919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.692946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.693101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.693223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.693250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.693398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.693527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.693556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.693718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.693882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.925 [2024-07-10 11:00:33.693911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:16.925 qpair failed and we were unable to recover it. 00:30:16.925 [2024-07-10 11:00:33.694046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.694200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.694227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.694342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.694477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.694505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.694627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.694750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.694777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 
00:30:17.204 [2024-07-10 11:00:33.694981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.695123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.695157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.695294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.695458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.695497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.695642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.695798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.695838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.696008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.696150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.696189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.696333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.696531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.696570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.696709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.696851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.204 [2024-07-10 11:00:33.696888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.204 qpair failed and we were unable to recover it. 00:30:17.204 [2024-07-10 11:00:33.697070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.697216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.697253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 
00:30:17.205 [2024-07-10 11:00:33.697404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.697575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.697602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.697727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.697861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.697887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.698049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.698188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.698216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.698339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.698475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.698504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.698631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.698780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.698807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.698960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.699081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.699108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.699279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.699417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.699454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 
00:30:17.205 [2024-07-10 11:00:33.699586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.699747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.699774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.699893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.700191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.700474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.700738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.700890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.701015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.701190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.701221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.701358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.701521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.701548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 
00:30:17.205 [2024-07-10 11:00:33.701701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.701816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.701843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.701974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.702131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.702159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.702293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.702419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.702454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.702579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.702700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.702726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.702876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.703172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.703510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 
00:30:17.205 [2024-07-10 11:00:33.703810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.703975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.704121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.704233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.704260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.704432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.704564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.704590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.704743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.704906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.704932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.205 [2024-07-10 11:00:33.705051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.705176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.205 [2024-07-10 11:00:33.705202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.205 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.705351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.705473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.705500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.705656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.705804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.705831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 
00:30:17.206 [2024-07-10 11:00:33.705950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.706218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.706513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.706784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.706943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.707076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.707259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.707286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.707437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.707562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.707589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.707727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.707881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.707908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 
00:30:17.206 [2024-07-10 11:00:33.708058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.708186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.708214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.708345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.708500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.708527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.708674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.708840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.708866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.708985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.709123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.709150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.709303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.709449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.709476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.709605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.709734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.709760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.709913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 
00:30:17.206 [2024-07-10 11:00:33.710236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.710532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.710862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.710979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.711157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.711473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.711761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.711943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.712099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.712218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.712243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 
00:30:17.206 [2024-07-10 11:00:33.712367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.712493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.712520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.712640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.712758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.712784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.712956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.713120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.713146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.206 [2024-07-10 11:00:33.713278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.713398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.206 [2024-07-10 11:00:33.713438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.206 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.713575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.713703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.713731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.713879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.714183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 
00:30:17.207 [2024-07-10 11:00:33.714480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.714807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.714984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.715138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.715299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.715325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.715452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.715608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.715635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.715789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.715916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.715943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.716077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.716229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.716255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.716372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.716522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.716549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 
00:30:17.207 [2024-07-10 11:00:33.716673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.716793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.716823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.716955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.717069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.717095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.717286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.717418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.717457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.717589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.717744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.717770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.717898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.718198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.718488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 
00:30:17.207 [2024-07-10 11:00:33.718756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.718912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.719044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.719176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.719203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.719355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.719480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.719507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.719655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.719809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.719835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.719969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.720117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.720143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.720258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.720395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.720421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.720556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.720712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.720738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 
00:30:17.207 [2024-07-10 11:00:33.720872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.721012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.721038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.721158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.721306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.721331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.207 qpair failed and we were unable to recover it. 00:30:17.207 [2024-07-10 11:00:33.721464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.721586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.207 [2024-07-10 11:00:33.721612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.721742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.721921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.721947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.722091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.722223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.722250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.722397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.722586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.722613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.722771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.722911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.722937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 
00:30:17.208 [2024-07-10 11:00:33.723070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.723224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.723250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.723375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.723516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.723544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.723678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.723839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.723866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.724004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.724155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.724181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.724329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.724478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.724504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.724637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.724769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.724795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.724957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 
00:30:17.208 [2024-07-10 11:00:33.725275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.725579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.725852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.725993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.726119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.726236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.726262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.726381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.726535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.726561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.726685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.726803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.726828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.726960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.727088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.727114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 
00:30:17.208 [2024-07-10 11:00:33.727263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.727416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.727449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.727594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.727744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.727770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.727935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.728235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.728538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.728820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.728978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 00:30:17.208 [2024-07-10 11:00:33.729131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.729248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.208 [2024-07-10 11:00:33.729274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.208 qpair failed and we were unable to recover it. 
00:30:17.214 [2024-07-10 11:00:33.774384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.774521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.774547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.774680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.774827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.774852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.774992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.775115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.775140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.775308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.775432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.775458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.775609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.775740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.775765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.775895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.776220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 
00:30:17.214 [2024-07-10 11:00:33.776498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.776764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.776906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.777045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.777190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.777216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.777371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.777525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.777551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.777672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.777799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.777824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.777973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.778121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.778147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.778301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.778420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.778452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 
00:30:17.214 [2024-07-10 11:00:33.778581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.778703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.778731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.778872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.779201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.779470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.779764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.779923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.780087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.780267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.780293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.780412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.780577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.780603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 
00:30:17.214 [2024-07-10 11:00:33.780736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.780864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.780889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.781047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.781195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.781221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.781377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.781504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.781531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.781665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.781815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.781840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.781996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.782121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.782152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.782303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.782434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.782460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.782612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.782764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.782792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 
00:30:17.214 [2024-07-10 11:00:33.782914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.783034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.783060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.214 [2024-07-10 11:00:33.783193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.783346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.214 [2024-07-10 11:00:33.783371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.214 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.783495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.783629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.783654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.783810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.783958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.783984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.784110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.784237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.784265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.784396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.784554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.784580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.784719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.784876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.784902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.785055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.785187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.785213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.785356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.785490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.785517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.785639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.785755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.785781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.785909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.786218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.786525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.786822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.786995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.787173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.787316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.787341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.787504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.787630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.787657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.787841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.787989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.788130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.788446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.788722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.788864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.789023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.789148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.789173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.789306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.789485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.789512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.789631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.789752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.789778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.789957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.790121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.790147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.790309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.790462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.790489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.790646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.790795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.790822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.790944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.791244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.791547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.791822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.791962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.792100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.792246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.792272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.792398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.792518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.792544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.792683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.792798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.792823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.792942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.793083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.793108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.793289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.793408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.793444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.793599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.793743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.793769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.793911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.794240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.794539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.794867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.794991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.795153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.795421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.795729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.795869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.796018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.796180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.796206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.796336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.796483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.796510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.796639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.796761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.796786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.796948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.797095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.797121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.797249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.797378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.797405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.797574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.797691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.797721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 
00:30:17.215 [2024-07-10 11:00:33.797878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.798186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.798491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.798780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.798955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.215 qpair failed and we were unable to recover it. 00:30:17.215 [2024-07-10 11:00:33.799076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.215 [2024-07-10 11:00:33.799202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.799229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.799394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.799562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.799588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.799720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.799853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.799881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 
00:30:17.216 [2024-07-10 11:00:33.800035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.800159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.800184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.800317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.800472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.800499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.800652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.800781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.800811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.800942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.801214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.801515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.801782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.801933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 
00:30:17.216 [2024-07-10 11:00:33.802052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.802217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.802242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.802374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.802519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.802545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.802681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.802803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.802829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.803010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.803134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.803161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.803343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.803465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.803492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.803617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.803769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.803794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.803950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 
00:30:17.216 [2024-07-10 11:00:33.804242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.804573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.804844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.804982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.805142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.805305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.805331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.805464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.805590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.805616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.805764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.805908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.805934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 00:30:17.216 [2024-07-10 11:00:33.806064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.806215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.216 [2024-07-10 11:00:33.806241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.216 qpair failed and we were unable to recover it. 
00:30:17.219 [2024-07-10 11:00:33.851651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.851777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.851802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.851969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.852105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.852131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.852285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.852413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.852444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.852598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.852745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.852770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.852887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.853192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.853525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 
00:30:17.219 [2024-07-10 11:00:33.853829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.853966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.854119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.854277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.854303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.854469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.854618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.854644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.854769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.854890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.854916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.855047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.855196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.855222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.855350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.855477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.855503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.855633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.855754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.855779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 
00:30:17.219 [2024-07-10 11:00:33.855947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.856222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.856522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.856839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.856989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.857144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.857458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.857753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.857900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 
00:30:17.219 [2024-07-10 11:00:33.858057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.858186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.858211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.858334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.858455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.858481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.858638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.858751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.858776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.219 [2024-07-10 11:00:33.858912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.859091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.219 [2024-07-10 11:00:33.859117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.219 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.859239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.859365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.859390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.859516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.859634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.859659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.859787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.859907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.859932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.860056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.860182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.860208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.860334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.860455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.860483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.860612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.860729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.860755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.860902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.861065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.861090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.861246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.861368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.861393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.861547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.861673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.861700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.861852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.862169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.862480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.862820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.862977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.863098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.863232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.863258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.863387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.863544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.863571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.863687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.863846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.863871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.864022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.864298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.864580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.864847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.864974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.865136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.865401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.865732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.865886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.866024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.866136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.866169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.866317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.866487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.866516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.866652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.866811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.866837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.866959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.867087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.867112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.867255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.867419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.867450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.867596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.867774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.867800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.867949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.868287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.868576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.868870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.868990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.869016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.869168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.869299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.869326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.869481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.869641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.869667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.869827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.869975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.870133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.870440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.870709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.870862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.871020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.871310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.871574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.871846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.871994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.872163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.872490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 
00:30:17.220 [2024-07-10 11:00:33.872756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.872929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.873062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.873178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.873204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.873321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.873452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.873478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.873600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.873750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.873778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.873902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.874053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.874079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.874197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.874313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.220 [2024-07-10 11:00:33.874339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.220 qpair failed and we were unable to recover it. 00:30:17.220 [2024-07-10 11:00:33.874493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.874617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.874643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 
00:30:17.221 [2024-07-10 11:00:33.874812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.874964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.874991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.875118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.875238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.875264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.875385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.875518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.875544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.875671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.875791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.875817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.875976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.876104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.876131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.876263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.876383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.876409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.876542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.876670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.876695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 
00:30:17.221 [2024-07-10 11:00:33.876849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.877187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.877474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.877774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.877949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.878071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.878203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.878229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.878355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.878510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.878536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.878658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.878783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.878809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 
00:30:17.221 [2024-07-10 11:00:33.878942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.879116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.879142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.879290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.879440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.879467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.879612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.879755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.879781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.879915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.880062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.880088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.880238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.880393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.880419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.880562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.880692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.880718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 00:30:17.221 [2024-07-10 11:00:33.880838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.881017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.221 [2024-07-10 11:00:33.881043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.221 qpair failed and we were unable to recover it. 
00:30:17.221 [2024-07-10 11:00:33.881188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.221 [2024-07-10 11:00:33.881318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.221 [2024-07-10 11:00:33.881346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.221 qpair failed and we were unable to recover it.
00:30:17.221 [... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:00:33.881 through 11:00:33.928 ...]
00:30:17.224 [2024-07-10 11:00:33.928235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.224 [2024-07-10 11:00:33.928352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.224 [2024-07-10 11:00:33.928378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.224 qpair failed and we were unable to recover it.
00:30:17.224 [2024-07-10 11:00:33.928511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.928669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.928695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.928826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.928969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.928995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.929171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.929314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.929339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.929489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.929628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.929654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.929776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.929919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.929945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.930122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.930258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.930284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.930413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.930595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.930621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 
00:30:17.224 [2024-07-10 11:00:33.930750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.930895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.930921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.931070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.931191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.931217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.931348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.931494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.931521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.931677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.931826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.931856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.932009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.932161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.932187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.932307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.932492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.932520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.932640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.932771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.932797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 
00:30:17.224 [2024-07-10 11:00:33.932960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.933224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.933537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.933840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.933998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.934024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.934173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.934290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.934316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.934468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.934583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.224 [2024-07-10 11:00:33.934609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.224 qpair failed and we were unable to recover it. 00:30:17.224 [2024-07-10 11:00:33.934761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.934878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.934904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.935062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.935181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.935207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.935340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.935459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.935486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.935635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.935755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.935782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.935901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.936219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.936505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.936843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.936992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.937140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.937418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.937755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.937933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.938093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.938215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.938242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.938374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.938500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.938527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.938661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.938807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.938833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.938987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.939123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.939149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.939274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.939435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.939462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.939614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.939759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.939786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.939904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.940214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.940489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.940781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.940936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.941093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.941212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.941238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.941358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.941494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.941521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.941677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.941808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.941834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.941956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.942251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.942547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.942825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.942965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.943109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.943246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.943272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.943419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.943556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.943582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.943708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.943858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.943884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.944018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.944202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.944228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.944345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.944485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.944512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.944641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.944785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.944810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.944940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.945240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.945552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.945824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.945998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.946133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.946314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.946340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.946461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.946599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.946625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.946772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.946923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.946949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.947087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.947252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.947282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.947444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.947563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.947590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.947739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.947869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.947896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.948049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.948174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.948200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.948326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.948508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.948551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.948673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.948840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.948867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.948994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.949144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.949171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.949339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.949463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.949490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.949638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.949759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.949785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 
00:30:17.225 [2024-07-10 11:00:33.949943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.950070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.950096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.225 [2024-07-10 11:00:33.950247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.950408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.225 [2024-07-10 11:00:33.950440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.225 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.950572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.950736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.950763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.950912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.951214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.951527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.951822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.951984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 
00:30:17.226 [2024-07-10 11:00:33.952136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.952263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.952289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.952416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.952661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.952687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.952842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.952965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.952992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.953176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.953290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.953317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.953439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.953568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.953594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.953719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.953856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.953882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.953999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.954151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.954177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 
00:30:17.226 [2024-07-10 11:00:33.954293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.954475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.954502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.954628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.954803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.954828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.954977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.955103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.955129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.955361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.955519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.955546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.955692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.955820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.955846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.955994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.956128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.956154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.956385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.956518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.956545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 
00:30:17.226 [2024-07-10 11:00:33.956699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.956830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.956856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.956981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.957139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.957165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.957295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.957458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.957486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.957604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.957764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.957790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.957937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.958207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.958497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 
00:30:17.226 [2024-07-10 11:00:33.958807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.958956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.959105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.959336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.959362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.959515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.959642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.959669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.959787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.959932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.959959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.960107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.960238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.960265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.960385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.960557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.960584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 00:30:17.226 [2024-07-10 11:00:33.960713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.960868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.226 [2024-07-10 11:00:33.960894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.226 qpair failed and we were unable to recover it. 
00:30:17.229 [2024-07-10 11:00:34.006432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.229 [2024-07-10 11:00:34.006601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.229 [2024-07-10 11:00:34.006637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.229 qpair failed and we were unable to recover it.
00:30:17.229 [2024-07-10 11:00:34.006803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.006935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.006967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.229 qpair failed and we were unable to recover it. 00:30:17.229 [2024-07-10 11:00:34.007152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.007306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.007333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.229 qpair failed and we were unable to recover it. 00:30:17.229 [2024-07-10 11:00:34.007470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.007638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.007664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.229 qpair failed and we were unable to recover it. 00:30:17.229 [2024-07-10 11:00:34.007814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.007982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.008010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.229 qpair failed and we were unable to recover it. 00:30:17.229 [2024-07-10 11:00:34.008146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.008315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.008342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.229 qpair failed and we were unable to recover it. 00:30:17.229 [2024-07-10 11:00:34.008483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.008611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.229 [2024-07-10 11:00:34.008637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.229 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.008804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.008943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.008969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 
00:30:17.498 [2024-07-10 11:00:34.009091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.009218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.009245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.009375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.009521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.009549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.009695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.009821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.009847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.009997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.010161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.010187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.010342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.010470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.010498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.010678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.010830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.010856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.010999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 
00:30:17.498 [2024-07-10 11:00:34.011261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.011548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.011850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.011974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.012160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.012508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.012827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.012979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.013138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.013295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.013321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 
00:30:17.498 [2024-07-10 11:00:34.013505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.013631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.013657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.498 [2024-07-10 11:00:34.013807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.013936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.498 [2024-07-10 11:00:34.013962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.498 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.014096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.014220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.014246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.014370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.014539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.014566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.014733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.014858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.014884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.015008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.015166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.015193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.015340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.015494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.015522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 
00:30:17.499 [2024-07-10 11:00:34.015661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.015798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.015824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.015985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.016154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.016180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.016311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.016462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.016490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.016621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.016771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.016796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.016950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.017100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.017125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.017258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.017398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.017429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.017557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.017735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.017761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 
00:30:17.499 [2024-07-10 11:00:34.017882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.018188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.018464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.018740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.018892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.019026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.019158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.019184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.019346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.019476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.019502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.019668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.019792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.019818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 
00:30:17.499 [2024-07-10 11:00:34.019950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.020098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.020124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.020304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.020462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.020490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.020632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.020763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.020789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.020913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.021073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.021099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.021280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.021439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.499 [2024-07-10 11:00:34.021467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.499 qpair failed and we were unable to recover it. 00:30:17.499 [2024-07-10 11:00:34.021591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.021727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.021755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.021881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 
00:30:17.500 [2024-07-10 11:00:34.022213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.022516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.022863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.022986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.023170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.023471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.023838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.023988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.024110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.024256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.024282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 
00:30:17.500 [2024-07-10 11:00:34.024467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.024614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.024640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.024766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.024906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.024931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.025067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.025198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.025224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2ea8000b90 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.025389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.025530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.025558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.025696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.025845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.025871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.026003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.026173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.026199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.026336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.026489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.026516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 
00:30:17.500 [2024-07-10 11:00:34.026667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.026788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.026819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.026974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.027101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.027126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.027276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.027457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.027483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.027635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.027756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.027781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.027909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.028215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 00:30:17.500 [2024-07-10 11:00:34.028509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.500 qpair failed and we were unable to recover it. 
00:30:17.500 [2024-07-10 11:00:34.028826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.500 [2024-07-10 11:00:34.028979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.029106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.029235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.029262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.029408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.029548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.029574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.029707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.029862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.029888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.030044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.030177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.030202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.030332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.030508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.030534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.030680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.030829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.030855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 
00:30:17.501 [2024-07-10 11:00:34.030979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.031130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.031156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.031335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.031467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.031493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.031660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.031789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.031814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.031932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.032262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.032569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.032861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.032978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 
00:30:17.501 [2024-07-10 11:00:34.033157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.033436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.033763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.033919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.034043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.034169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.034195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.034328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.034488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.034514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.034650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.034798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.034823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.034978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.035126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.035152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 
00:30:17.501 [2024-07-10 11:00:34.035270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.035417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.035448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.035587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.035720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.035745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.501 [2024-07-10 11:00:34.035865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.036007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.501 [2024-07-10 11:00:34.036032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.501 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.036203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.036368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.036393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.036533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.036697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.036723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.036845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.036991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.037147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 
00:30:17.502 [2024-07-10 11:00:34.037443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.037726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.037869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.037991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.038151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.038177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.038297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.038432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.038459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.038618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.038745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.038771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.038913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.039032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.039058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 00:30:17.502 [2024-07-10 11:00:34.039187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.039343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.502 [2024-07-10 11:00:34.039369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.502 qpair failed and we were unable to recover it. 
00:30:17.502 [2024-07-10 11:00:34.039499 - 11:00:34.082296] (the same sequence - posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. - repeats for every reconnect attempt in this interval)
00:30:17.508 [2024-07-10 11:00:34.082428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.082570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.082595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.082725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.082838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.082863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.083025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.083138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.083163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.083316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.083441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.083468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.083591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.083746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.083771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.083902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.084196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 
00:30:17.508 [2024-07-10 11:00:34.084540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.084840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.084989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.085151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.085277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.085302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.085436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.085572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.085598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.085746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.085871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.085897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.086014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.086157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.086183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.086337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.086463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.086489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 
00:30:17.508 [2024-07-10 11:00:34.086638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.086793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.086818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.086938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.087248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.087568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.087840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.087988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.088013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.508 qpair failed and we were unable to recover it. 00:30:17.508 [2024-07-10 11:00:34.088147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.508 [2024-07-10 11:00:34.088277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.088302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.088431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.088547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.088572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 
00:30:17.509 [2024-07-10 11:00:34.088690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.088843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.088868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.089018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.089153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.089180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.089305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.089432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.089459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.089583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.089701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.089727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.089906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.090166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.090442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 
00:30:17.509 [2024-07-10 11:00:34.090779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.090930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.091050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.091197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.091223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.091343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.091464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.091490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.091611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.091729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.091754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.091883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.092033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.092109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.092265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.092396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.092422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.092555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.092689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.092714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 
00:30:17.509 [2024-07-10 11:00:34.092865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.093183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.093460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.093757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.093932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.094046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.094191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.094217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.094342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.094495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.094521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.094703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.094824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.094850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 
00:30:17.509 [2024-07-10 11:00:34.094970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.095151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.095177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.095300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.095455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.095482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.095607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.095724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.095750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.095879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.096027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.509 [2024-07-10 11:00:34.096052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.509 qpair failed and we were unable to recover it. 00:30:17.509 [2024-07-10 11:00:34.096208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.096325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.096351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.096488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.096637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.096663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.096789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.096937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.096963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 
00:30:17.510 [2024-07-10 11:00:34.097092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.097250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.097275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.097404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.097557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.097582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.097704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.097825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.097851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.098002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.098294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.098592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.098869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.098999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 
00:30:17.510 [2024-07-10 11:00:34.099139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.099440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.099746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.099904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.100025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.100153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.100180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.100307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.100422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.100473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.100601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.100728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.100754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.100877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 
00:30:17.510 [2024-07-10 11:00:34.101206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.101509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.101812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.101992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.102118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.102245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.102270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.102422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.102551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.102577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.102709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.102859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.102884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.103015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.103161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.103186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 
00:30:17.510 [2024-07-10 11:00:34.103318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.103477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.103505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.103627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.103776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.103802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.510 qpair failed and we were unable to recover it. 00:30:17.510 [2024-07-10 11:00:34.103922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.104039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.510 [2024-07-10 11:00:34.104064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.104193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.104326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.104352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.104511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.104626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.104651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.104806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.104940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.104965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.105082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.105196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.105221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 
00:30:17.511 [2024-07-10 11:00:34.105344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.105489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.105516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.105639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.105832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.105858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.105998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.106151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.106177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.106327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.106451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.106477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.106600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.106726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.106752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.106897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.107028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.107053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.107243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.107368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.107397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 
00:30:17.511 [2024-07-10 11:00:34.107540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.107689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.107715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.107897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.108189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.108483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.108785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.108925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.109078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.109236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.109261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.109379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.109502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.109529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 
00:30:17.511 [2024-07-10 11:00:34.109712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.109828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.109853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.110005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.110158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.110184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.110304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.110481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.110511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.110650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.110781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.110807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.110957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.111257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.111589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 
00:30:17.511 [2024-07-10 11:00:34.111860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.111981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.112007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.511 qpair failed and we were unable to recover it. 00:30:17.511 [2024-07-10 11:00:34.112165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.511 [2024-07-10 11:00:34.112295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.112320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.512 qpair failed and we were unable to recover it. 00:30:17.512 [2024-07-10 11:00:34.112476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.112603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.112629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.512 qpair failed and we were unable to recover it. 00:30:17.512 [2024-07-10 11:00:34.112775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.112900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.112927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.512 qpair failed and we were unable to recover it. 00:30:17.512 [2024-07-10 11:00:34.113064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.113187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.113212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.512 qpair failed and we were unable to recover it. 00:30:17.512 [2024-07-10 11:00:34.113348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.113479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.113505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.512 qpair failed and we were unable to recover it. 00:30:17.512 [2024-07-10 11:00:34.113631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.113762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.512 [2024-07-10 11:00:34.113789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.512 qpair failed and we were unable to recover it. 
00:30:17.512 [2024-07-10 11:00:34.113912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.512 [2024-07-10 11:00:34.114062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.512 [2024-07-10 11:00:34.114087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.512 qpair failed and we were unable to recover it.
[... the same three-message error sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x23ae350 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for every connection retry from 2024-07-10 11:00:34.114223 through 11:00:34.160756 ...]
00:30:17.518 [2024-07-10 11:00:34.160871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.518 [2024-07-10 11:00:34.161018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.518 [2024-07-10 11:00:34.161044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.518 qpair failed and we were unable to recover it.
00:30:17.518 [2024-07-10 11:00:34.161167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.161290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.161317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.161468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.161592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.161618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.161746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.161860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.161885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.162017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.162184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.162209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.162334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.162466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.162492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.162618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.162780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.162806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.162928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.163074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.163100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 
00:30:17.518 [2024-07-10 11:00:34.163221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.163397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.163422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.163607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.163797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.163823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.163950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.164066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.164092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.164270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.164387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.164413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.164569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.164715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.164740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.164867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.165216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 
00:30:17.518 [2024-07-10 11:00:34.165487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.165807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.165951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.166126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.166261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.166286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.166476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.166605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.166630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.166787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.166961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.166987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.167135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.167295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.167320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.518 [2024-07-10 11:00:34.167452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.167583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.167608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 
00:30:17.518 [2024-07-10 11:00:34.167732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.167877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.518 [2024-07-10 11:00:34.167902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.518 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.168050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.168203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.168229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.168344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.168490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.168516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.168666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.168801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.168827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.168943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.169118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.169143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.169270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.169437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.169464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.169587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.169714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.169740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 
00:30:17.519 [2024-07-10 11:00:34.169874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.170171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.170489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.170774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.170945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.171064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.171191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.171217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.171353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.171501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.171528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.171665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.171823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.171850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 
00:30:17.519 [2024-07-10 11:00:34.171974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.172090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.172116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.172269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.172395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.172422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.172584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.172703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.172729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.172876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.173172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.173444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.173722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.173876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 
00:30:17.519 [2024-07-10 11:00:34.173991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.174138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.174163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.174283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.174442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.174468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.174594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.174714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.174740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.174890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.175194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.175465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 00:30:17.519 [2024-07-10 11:00:34.175735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.519 [2024-07-10 11:00:34.175914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.519 qpair failed and we were unable to recover it. 
00:30:17.520 [2024-07-10 11:00:34.176064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.176242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.176268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.176414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.176572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.176599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.176732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.176849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.176875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.177023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.177180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.177206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.177335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.177491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.177518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.177643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.177767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.177793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.177946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 
00:30:17.520 [2024-07-10 11:00:34.178251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.178522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.178820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.178993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.179143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.179269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.179294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.179435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.179559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.179585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.179731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.179862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.179887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.180069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.180245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.180271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 
00:30:17.520 [2024-07-10 11:00:34.180440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.180603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.180628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.180758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.180871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.180898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.181030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.181150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.181176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.181297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.181412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.181445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.181605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.181762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.181787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.181915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.182222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 
00:30:17.520 [2024-07-10 11:00:34.182569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.182844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.182990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.183156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.183493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.183807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.183947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.520 [2024-07-10 11:00:34.184096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.184239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.520 [2024-07-10 11:00:34.184264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.520 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.184419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.184561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.184587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 
00:30:17.521 [2024-07-10 11:00:34.184743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.184891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.184916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.185050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.185193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.185219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.185374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.185527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.185554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.185701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.185851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.185877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.186006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.186156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.186182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.186303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.186430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.186456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.186584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.186764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.186790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 
00:30:17.521 [2024-07-10 11:00:34.186941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.187211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.187544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.187885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.187999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.188148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.188468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.188767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.188906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 
00:30:17.521 [2024-07-10 11:00:34.189057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.189181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.189207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.189391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.189537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.189564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.189700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.189827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.189855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.190037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.190158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.190184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.190303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.190446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.190472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.190652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.190802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.190827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.190963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.191089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.191116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 
00:30:17.521 [2024-07-10 11:00:34.191269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.191422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.191453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.191574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.191709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.191739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.191867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.192043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.192069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.192194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.192318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.521 [2024-07-10 11:00:34.192343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.521 qpair failed and we were unable to recover it. 00:30:17.521 [2024-07-10 11:00:34.192506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.192688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.192713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.192947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.193097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.193123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.193274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.193436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.193463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 
00:30:17.522 [2024-07-10 11:00:34.193591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.193709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.193735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.193893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.194195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.194479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.194781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.194925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.195056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.195201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.195228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.195383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.195541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.195568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 
00:30:17.522 [2024-07-10 11:00:34.195694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.195815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.195842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.196074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.196195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.196222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.196381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.196552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.196578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.196701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.196878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.196904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.197025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.197147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.197172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.197288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.197401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.197432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.197565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.197700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.197726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 
00:30:17.522 [2024-07-10 11:00:34.197874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.198265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.198560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.198850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.198979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.199186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.199520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.199806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.199956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 
00:30:17.522 [2024-07-10 11:00:34.200103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.200221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.200248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.200373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.200519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.200546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.200667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.200840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.522 [2024-07-10 11:00:34.200865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.522 qpair failed and we were unable to recover it. 00:30:17.522 [2024-07-10 11:00:34.200984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.201164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.201189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.201342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.201468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.201494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.201658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.201775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.201801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.201953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 
00:30:17.523 [2024-07-10 11:00:34.202239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.202510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.202812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.202960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.203088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.203250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.203276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.203435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.203564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.203590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.203724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.203841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.203867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.204018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.204149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.204175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 
00:30:17.523 [2024-07-10 11:00:34.204319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.204473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.204501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.204626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.204743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.204769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.204889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.205198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.205491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.205826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.205977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.206003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.206129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.206306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.206332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 
00:30:17.523 [2024-07-10 11:00:34.206479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.206666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.206691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.206828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.207162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.207477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.207776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.207947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.523 qpair failed and we were unable to recover it. 00:30:17.523 [2024-07-10 11:00:34.208115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.523 [2024-07-10 11:00:34.208270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.208295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.208433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.208589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.208615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 
00:30:17.524 [2024-07-10 11:00:34.208776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.208896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.208921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.209041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.209163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.209189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.209372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.209532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.209559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.209680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.209827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.209852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.210000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.210123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.210150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.210304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.210469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.210495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.210628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.210759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.210784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 
00:30:17.524 [2024-07-10 11:00:34.210922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.211153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.211178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.211294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.211418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.211448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.211571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.211720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.211746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.211869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.212162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.212477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.212782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.212963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 
00:30:17.524 [2024-07-10 11:00:34.213083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.213197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.213222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.213369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.213487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.213513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.213663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.213795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.213820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.213949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.214094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.214119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.214282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.214442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.214469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.214601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.214719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.214744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.214974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.215092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.215117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 
00:30:17.524 [2024-07-10 11:00:34.215281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.215400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.215430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.215666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.215793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.215819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.215980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.216101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.216126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.524 [2024-07-10 11:00:34.216307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.216432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.524 [2024-07-10 11:00:34.216458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.524 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.216586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.216733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.216759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.216912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.217239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 
00:30:17.525 [2024-07-10 11:00:34.217532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.217820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.217987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.218141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.218269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.218295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.218528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.218682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.218707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.218836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.218962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.218988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.219138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.219291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.219317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.219477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.219599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.219625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 
00:30:17.525 [2024-07-10 11:00:34.219777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.219938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.219963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.220085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.220243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.220269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.220398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.220553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.220579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.220706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.220820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.220846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.220980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.221260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.221522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 
00:30:17.525 [2024-07-10 11:00:34.221795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.221938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.222124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.222239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.222264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.222398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.222559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.222587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.222753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.222899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.222926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.223058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.223211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.223238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.223397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.223555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.223586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.223737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.223887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.223913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 
00:30:17.525 [2024-07-10 11:00:34.224067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.224220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.224246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.224399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.224542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.224568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.224724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.224874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.525 [2024-07-10 11:00:34.224899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.525 qpair failed and we were unable to recover it. 00:30:17.525 [2024-07-10 11:00:34.225030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.225153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.225179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.225327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.225483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.225509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.225630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.225752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.225777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.225906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.226040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.226065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 
00:30:17.526 [2024-07-10 11:00:34.226194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.226342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.226368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.226533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.226664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.226694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.226857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.227006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.227032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.227188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.227334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.227360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.227593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.227717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.227743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.227925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.228080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.228105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.228257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.228408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.228439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 
00:30:17.526 [2024-07-10 11:00:34.228589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.228714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.228740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.228875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.229187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.229496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.229824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.229977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.230119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.230265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.230290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.230472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.230626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.230652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 
00:30:17.526 [2024-07-10 11:00:34.230779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.230916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.230942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.231058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.231204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.231230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.231381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.231538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.231565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.231723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.231838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.231863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.231999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.232140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.232166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.232297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.232433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.232459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.232637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.232789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.232814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 
00:30:17.526 [2024-07-10 11:00:34.232936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.233088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.233113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.233235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.233390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.526 [2024-07-10 11:00:34.233416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.526 qpair failed and we were unable to recover it. 00:30:17.526 [2024-07-10 11:00:34.233543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.233669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.233695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.233845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.233961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.233989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.234121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.234234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.234260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.234419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.234555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.234580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.234714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.234833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.234860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 
00:30:17.527 [2024-07-10 11:00:34.235013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.235126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.235152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.235304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.235431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.235459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.235594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.235722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.235748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.235901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.236030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.236057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.236235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.236403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.236436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.236608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.236731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.236757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.236912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 
00:30:17.527 [2024-07-10 11:00:34.237189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.237501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.237782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.237958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.238080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.238228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.238253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.238385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.238507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.238533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.238672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.238788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.238813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.238971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.239125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.239150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 
00:30:17.527 [2024-07-10 11:00:34.239280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.239403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.239434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.239591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.239741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.239766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.239881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.240186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.240490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.240791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.240960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.241094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.241250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.241277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 
00:30:17.527 [2024-07-10 11:00:34.241437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.241555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.527 [2024-07-10 11:00:34.241581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.527 qpair failed and we were unable to recover it. 00:30:17.527 [2024-07-10 11:00:34.241711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.241853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.241879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.242024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.242153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.242178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.242306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.242457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.242487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.242616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.242733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.242759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.242894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.243196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 
00:30:17.528 [2024-07-10 11:00:34.243495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.243767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.243947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.244076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.244205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.244232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.244362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.244507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.244534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.244658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.244809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.244835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.244955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.245104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.245130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.245285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.245406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.245439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 
00:30:17.528 [2024-07-10 11:00:34.245570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.245702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.245728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.245882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.246178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.246472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.246804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.246989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.247112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.247264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.247290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.247456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.247610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.247636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 
00:30:17.528 [2024-07-10 11:00:34.247761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.247883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.247910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.248050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.248205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.248231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.248397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.248558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.248584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.528 qpair failed and we were unable to recover it. 00:30:17.528 [2024-07-10 11:00:34.248735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.528 [2024-07-10 11:00:34.248864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.248889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.249028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.249171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.249197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.249355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.249480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.249506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.249668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.249802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.249828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 
00:30:17.529 [2024-07-10 11:00:34.249946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.250097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.250122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.250278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.250399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.250431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.250601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.250715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.250741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.250921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.251206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.251535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.251861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.251977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 
00:30:17.529 [2024-07-10 11:00:34.252140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.252460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.252777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.252967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.253083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.253248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.253274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.253399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.253536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.253561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.253690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.253807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.253833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.253977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.254127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.254153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 
00:30:17.529 [2024-07-10 11:00:34.254266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.254417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.254472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.254606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.254727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.254754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.254894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.255205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.255524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.255856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.255978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.256122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 
00:30:17.529 [2024-07-10 11:00:34.256463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.256769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.256943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.529 qpair failed and we were unable to recover it. 00:30:17.529 [2024-07-10 11:00:34.257066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.529 [2024-07-10 11:00:34.257203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.257229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.257349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.257472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.257500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.257631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.257774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.257800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.257960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.258081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.258113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.258233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.258390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.258416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 
00:30:17.530 [2024-07-10 11:00:34.258544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.258705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.258731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.258873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.259193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.259523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.259791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.259973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.260101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.260233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.260259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.260386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.260574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.260600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 
00:30:17.530 [2024-07-10 11:00:34.260777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.260900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.260926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.261057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.261210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.261234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.261369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.261484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.261511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.261627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.261774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.261800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.261928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.262285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.262567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 
00:30:17.530 [2024-07-10 11:00:34.262848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.262981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.263131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.263417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.263754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.263904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.264031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.264161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.264187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.264313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.264448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.264474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.264635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.264764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.264790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 
00:30:17.530 [2024-07-10 11:00:34.264954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.265075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.265103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.530 qpair failed and we were unable to recover it. 00:30:17.530 [2024-07-10 11:00:34.265268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.530 [2024-07-10 11:00:34.265450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.265478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.265621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.265752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.265778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.265894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.266202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.266477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.266761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.266937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 
00:30:17.531 [2024-07-10 11:00:34.267058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.267185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.267211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.267333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.267466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.267493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.267634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.267825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.267851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.268014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.268130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.268156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.268330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.268459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.268486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.268620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.268740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.268767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.268913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 
00:30:17.531 [2024-07-10 11:00:34.269227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.269522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.269814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.269962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.270110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.270223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.270249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.270381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.270533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.270561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.270696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.270846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.270873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.270995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.271144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.271171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 
00:30:17.531 [2024-07-10 11:00:34.271335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.271459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.271487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.271615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.271732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.271758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.271885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.272012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.272038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.272197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.272377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 11:00:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:17.531 [2024-07-10 11:00:34.272404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.272549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.272681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.272708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 11:00:34 -- common/autotest_common.sh@852 -- # return 0 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.272892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 [2024-07-10 11:00:34.273046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 11:00:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:17.531 [2024-07-10 11:00:34.273073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.531 qpair failed and we were unable to recover it. 00:30:17.531 [2024-07-10 11:00:34.273224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.531 11:00:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:17.532 [2024-07-10 11:00:34.273381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.273409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 
00:30:17.532 [2024-07-10 11:00:34.273543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:30:17.532 [2024-07-10 11:00:34.273669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.273696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.273824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.273973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.273999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.274146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.274269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.274297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.274459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.274590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.274616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.274745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.274893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.274918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.275068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.275216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.275241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.275362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.275476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.275503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 
00:30:17.532 [2024-07-10 11:00:34.275622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.275768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.275794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.275928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.276201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.276527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.276812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.276986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.277119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.277241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.277268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.277385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.277556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.277583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 
00:30:17.532 [2024-07-10 11:00:34.277711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.277858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.277884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.278035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.278181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.278207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.278353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.278476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.278502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.278628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.278742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.278768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.278895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.279223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.279534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 
00:30:17.532 [2024-07-10 11:00:34.279813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.279994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.280146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.280274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.280301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.280460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.280585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.280612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.280772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.280896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.280923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.532 [2024-07-10 11:00:34.281068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.281196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.532 [2024-07-10 11:00:34.281223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.532 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.281348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.281476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.281502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.281628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.281764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.281789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 
00:30:17.533 [2024-07-10 11:00:34.281937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.282226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.282553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.282837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.282977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.283115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.283245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.283271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.283388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.283520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.283547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.283667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.283795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.283833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 
00:30:17.533 [2024-07-10 11:00:34.283949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.284260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.284557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.284860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.284989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.285015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.285150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.285300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.285326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.285511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.285670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.285700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 00:30:17.533 [2024-07-10 11:00:34.285830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.285982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.286007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.533 qpair failed and we were unable to recover it. 
00:30:17.533 [2024-07-10 11:00:34.286163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.286287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.533 [2024-07-10 11:00:34.286313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.286451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.286578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.286604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.286731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.286861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.286887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.287009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.287160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.287186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.287314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.287467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.287494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.287643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.287773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.287800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.287957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 
00:30:17.534 [2024-07-10 11:00:34.288250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.288560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.288864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.288983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.289178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.289462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.289797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.289971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.290123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.290268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.290294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 
00:30:17.534 [2024-07-10 11:00:34.290446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.290596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.290622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.290747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.290899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.290926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.291077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.291238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.291264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.291405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.291560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.291586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.291717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.291875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.291901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.292037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.292157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.292183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.292336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.292482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.292509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 
00:30:17.534 [2024-07-10 11:00:34.292643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.292774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.292799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.292923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.293257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.293544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.293827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.293973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.294000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.294119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.294241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.294267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.534 qpair failed and we were unable to recover it. 00:30:17.534 [2024-07-10 11:00:34.294434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.534 [2024-07-10 11:00:34.294592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.294618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 
00:30:17.535 [2024-07-10 11:00:34.294800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.294913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.294940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.295112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.295231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.295256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.295381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.295537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.295564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.295683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.295808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.295834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.295979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.296113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.296139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.296261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 11:00:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.535 [2024-07-10 11:00:34.296410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.296444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.296573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 11:00:34 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.535 [2024-07-10 11:00:34.296732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.296759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 
00:30:17.535 [2024-07-10 11:00:34.296904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 11:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.535 [2024-07-10 11:00:34.297033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.297060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.297173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:30:17.535 [2024-07-10 11:00:34.297310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.297337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.297456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.297604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.297631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.297758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.297912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.297943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.298068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.298187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.298212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.298336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.298466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.298492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.298620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.298736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.298762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 
00:30:17.535 [2024-07-10 11:00:34.298900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.299196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.299527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.299837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.299983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.300139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.300482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.300798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.300980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 
00:30:17.535 [2024-07-10 11:00:34.301099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.301227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.301254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.301408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.301548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.301575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.301706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.301836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.301862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.301984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.302146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.302173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.535 qpair failed and we were unable to recover it. 00:30:17.535 [2024-07-10 11:00:34.302308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.535 [2024-07-10 11:00:34.302466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.302505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.302635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.302799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.302825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.303039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.303161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.303187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 
00:30:17.536 [2024-07-10 11:00:34.303320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.303449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.303476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.303597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.303730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.303756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.303887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.304187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.304462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.304765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.304939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.305121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.305243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.305269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 
00:30:17.536 [2024-07-10 11:00:34.305419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.305592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.305617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.305746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.305891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.305917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.306042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.306191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.306217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.306348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.306497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.306524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.306738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.306858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.306884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.307040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.307199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.307225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 00:30:17.536 [2024-07-10 11:00:34.307351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.307476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.536 [2024-07-10 11:00:34.307502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.536 qpair failed and we were unable to recover it. 
00:30:17.536 [2024-07-10 11:00:34.307667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.536 [2024-07-10 11:00:34.307823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.536 [2024-07-10 11:00:34.307850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.536 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x23ae350 (10.0.0.2:4420) from 2024-07-10 11:00:34.307985 through 11:00:34.319225 ...]
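errno = 111 on Linux is ECONNREFUSED: the initiator side of this run keeps retrying connect() against 10.0.0.2:4420 before the target listener has been configured, so every attempt is refused at the socket level. A minimal sketch to confirm that mapping on a test node (assumes python3 is installed; this is not part of the test script itself):

    # decode errno 111 locally (hypothetical helper, not taken from target_disconnect.sh)
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused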
00:30:17.799 [2024-07-10 11:00:34.319340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue through 11:00:34.321243 ...]
00:30:17.800 Malloc0
00:30:17.800 [2024-07-10 11:00:34.321377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.321404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.800 qpair failed and we were unable to recover it.
00:30:17.800 [2024-07-10 11:00:34.321546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.321703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.321730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.800 qpair failed and we were unable to recover it.
00:30:17.800 11:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:17.800 [2024-07-10 11:00:34.321845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 11:00:34 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:17.800 [2024-07-10 11:00:34.321975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.322002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.800 qpair failed and we were unable to recover it.
00:30:17.800 11:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:17.800 [2024-07-10 11:00:34.322146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 11:00:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue through 11:00:34.323514 ...]
00:30:17.800 [2024-07-10 11:00:34.323631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue through 11:00:34.324986 ...]
00:30:17.800 [2024-07-10 11:00:34.324993] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:17.800 [2024-07-10 11:00:34.325120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.325146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.800 qpair failed and we were unable to recover it.
[... further connect() failed (errno = 111) / sock connection error / qpair failed retries through 11:00:34.325764 ...]
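The "*** TCP Transport Init ***" notice above is the target reacting to the rpc_cmd nvmf_create_transport call from host/target_disconnect.sh. Outside the autotest wrappers the same step is a plain rpc.py invocation; a sketch, assuming an SPDK nvmf_tgt is already running on the default RPC socket (any extra tuning flags the test passes are omitted here):

    # sketch: create the TCP transport on a running nvmf_tgt (assumes the default /var/tmp/spdk.sock RPC socket)
    ./scripts/rpc.py nvmf_create_transport -t TCP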
00:30:17.800 [2024-07-10 11:00:34.325885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.326033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.800 [2024-07-10 11:00:34.326059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.800 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats from 11:00:34.326182 through 11:00:34.332252 ...]
00:30:17.801 [2024-07-10 11:00:34.332407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.801 [2024-07-10 11:00:34.332565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.801 [2024-07-10 11:00:34.332591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.801 qpair failed and we were unable to recover it.
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue ...]
00:30:17.801 11:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:17.801 11:00:34 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:17.801 11:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:17.801 11:00:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue through 11:00:34.334406 ...]
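Next the script creates the NVMe-oF subsystem the host will later connect to; the flags in the trace (-a to allow any host NQN, -s for the serial number) map directly onto the public RPC. A standalone sketch of the same call, again assuming rpc.py against a running target:

    # sketch: create subsystem cnode1, allow any host NQN, fixed serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001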
00:30:17.801 [2024-07-10 11:00:34.334551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.801 [2024-07-10 11:00:34.334700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.801 [2024-07-10 11:00:34.334726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.801 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats from 11:00:34.334847 through 11:00:34.340853 ...]
00:30:17.802 [2024-07-10 11:00:34.340977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.802 [2024-07-10 11:00:34.341108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.802 [2024-07-10 11:00:34.341134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.802 qpair failed and we were unable to recover it.
00:30:17.802 11:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:17.802 [2024-07-10 11:00:34.341262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.802 11:00:34 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:17.802 11:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:17.802 11:00:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue through 11:00:34.342838 ...]
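Malloc0 is the RAM-backed bdev the subsystem exports as a namespace; its creation is not visible in this part of the log, so the first line in the sketch below is an assumption about how it was produced earlier in the run (e.g. via bdev_malloc_create), not something taken from this output:

    # sketch: create a 64 MiB / 512-byte-block malloc bdev (assumed earlier step) and attach it as a namespace
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0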
00:30:17.802 [2024-07-10 11:00:34.342960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.802 [2024-07-10 11:00:34.343123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.802 [2024-07-10 11:00:34.343149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.802 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats from 11:00:34.343264 through 11:00:34.349211 ...]
00:30:17.803 11:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:17.803 [2024-07-10 11:00:34.349340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.803 11:00:34 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:17.803 [2024-07-10 11:00:34.349489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.803 [2024-07-10 11:00:34.349516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420
00:30:17.803 qpair failed and we were unable to recover it.
00:30:17.803 11:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:17.803 [2024-07-10 11:00:34.349664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.803 11:00:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / sock connection error / qpair failed retries continue through 11:00:34.351272 ...]
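Adding the listener is the step the retrying host has been waiting for: until this RPC completes, nothing is bound to 10.0.0.2:4420, which is why every connect() so far came back ECONNREFUSED. A sketch of the same call outside the harness:

    # sketch: expose the subsystem on the TCP portal the host keeps retrying against
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420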
00:30:17.804 [2024-07-10 11:00:34.351398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.351551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.351578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.351711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.351839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.351866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.352025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.352177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.352203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.352352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.352498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.352526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.352692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.352839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.352865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.352984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.353139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.804 [2024-07-10 11:00:34.353165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae350 with addr=10.0.0.2, port=4420 00:30:17.804 qpair failed and we were unable to recover it. 
00:30:17.804 [2024-07-10 11:00:34.353238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:17.804 [2024-07-10 11:00:34.355722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.804 [2024-07-10 11:00:34.355871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.804 [2024-07-10 11:00:34.355899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.804 [2024-07-10 11:00:34.355916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.804 [2024-07-10 11:00:34.355930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350
00:30:17.804 [2024-07-10 11:00:34.355964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:17.804 qpair failed and we were unable to recover it.
00:30:17.804 11:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:17.804 11:00:34 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:17.804 11:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:17.804 11:00:34 -- common/autotest_common.sh@10 -- # set +x
00:30:17.804 11:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:17.804 11:00:34 -- host/target_disconnect.sh@58 -- # wait 3587353
00:30:17.804 [2024-07-10 11:00:34.365655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:17.804 [2024-07-10 11:00:34.365791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:17.804 [2024-07-10 11:00:34.365820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:17.804 [2024-07-10 11:00:34.365837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:17.804 [2024-07-10 11:00:34.365851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350
00:30:17.804 [2024-07-10 11:00:34.365895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:17.804 qpair failed and we were unable to recover it.
00:30:17.804 [2024-07-10 11:00:34.375597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.804 [2024-07-10 11:00:34.375728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.804 [2024-07-10 11:00:34.375757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.804 [2024-07-10 11:00:34.375775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.804 [2024-07-10 11:00:34.375789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.804 [2024-07-10 11:00:34.375833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.385644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.804 [2024-07-10 11:00:34.385786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.804 [2024-07-10 11:00:34.385814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.804 [2024-07-10 11:00:34.385830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.804 [2024-07-10 11:00:34.385843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.804 [2024-07-10 11:00:34.385872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.395603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.804 [2024-07-10 11:00:34.395737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.804 [2024-07-10 11:00:34.395764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.804 [2024-07-10 11:00:34.395780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.804 [2024-07-10 11:00:34.395794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.804 [2024-07-10 11:00:34.395823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.804 qpair failed and we were unable to recover it. 
00:30:17.804 [2024-07-10 11:00:34.405637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.804 [2024-07-10 11:00:34.405765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.804 [2024-07-10 11:00:34.405801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.804 [2024-07-10 11:00:34.405816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.804 [2024-07-10 11:00:34.405830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.804 [2024-07-10 11:00:34.405874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.804 qpair failed and we were unable to recover it. 00:30:17.804 [2024-07-10 11:00:34.415643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.804 [2024-07-10 11:00:34.415774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.804 [2024-07-10 11:00:34.415800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.804 [2024-07-10 11:00:34.415815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.415828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.415857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.425630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.425758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.425783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.425798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.425811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.425840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 
00:30:17.805 [2024-07-10 11:00:34.435741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.435891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.435917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.435932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.435946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.435975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.445726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.445858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.445896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.445922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.445955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.445985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.455755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.455910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.455936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.455951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.455965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.456008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 
00:30:17.805 [2024-07-10 11:00:34.465861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.465994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.466020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.466036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.466050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.466078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.475825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.475958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.475984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.475999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.476013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.476043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.485832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.485961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.485987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.486003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.486017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.486046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 
00:30:17.805 [2024-07-10 11:00:34.496013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.496155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.496181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.496196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.496209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.496238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.505963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.506100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.506127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.506142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.506159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.506190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.516022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.516162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.516189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.516205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.516221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.516250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 
00:30:17.805 [2024-07-10 11:00:34.526035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.526165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.526193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.526208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.526222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.526251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.535979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.805 [2024-07-10 11:00:34.536105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.805 [2024-07-10 11:00:34.536132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.805 [2024-07-10 11:00:34.536153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.805 [2024-07-10 11:00:34.536168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.805 [2024-07-10 11:00:34.536197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.805 qpair failed and we were unable to recover it. 00:30:17.805 [2024-07-10 11:00:34.545994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.546125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.546151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.546166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.546180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.546208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 
00:30:17.806 [2024-07-10 11:00:34.556060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.556196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.556221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.556236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.556250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.556279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 00:30:17.806 [2024-07-10 11:00:34.566064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.566193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.566219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.566235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.566250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.566279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 00:30:17.806 [2024-07-10 11:00:34.576070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.576205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.576231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.576247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.576261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.576291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 
00:30:17.806 [2024-07-10 11:00:34.586125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.586297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.586323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.586339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.586352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.586380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 00:30:17.806 [2024-07-10 11:00:34.596154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.596331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.596357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.596372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.596386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.596414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 00:30:17.806 [2024-07-10 11:00:34.606211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.606359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.606389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.606406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.606420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.606468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 
00:30:17.806 [2024-07-10 11:00:34.616233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.806 [2024-07-10 11:00:34.616380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.806 [2024-07-10 11:00:34.616409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.806 [2024-07-10 11:00:34.616433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.806 [2024-07-10 11:00:34.616449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:17.806 [2024-07-10 11:00:34.616491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.806 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.626246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.626380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.626409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.626437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.626454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.626489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.636314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.636470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.636499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.636515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.636529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.636561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 
00:30:18.064 [2024-07-10 11:00:34.646306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.646451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.646478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.646494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.646508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.646538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.656337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.656512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.656538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.656554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.656567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.656597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.666331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.666469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.666514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.666530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.666545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.666575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 
00:30:18.064 [2024-07-10 11:00:34.676406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.676545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.676581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.676597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.676611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.676640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.686401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.686543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.686570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.686586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.686600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.686628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.696470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.696638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.696665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.696679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.696693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.696722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 
00:30:18.064 [2024-07-10 11:00:34.706468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.706598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.706625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.706640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.706653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.706682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.716509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.716680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.716706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.716727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.716742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.716772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.726498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.726633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.726660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.726675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.726688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.726719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 
00:30:18.064 [2024-07-10 11:00:34.736549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.736677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.736703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.064 [2024-07-10 11:00:34.736718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.064 [2024-07-10 11:00:34.736731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.064 [2024-07-10 11:00:34.736760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.064 qpair failed and we were unable to recover it. 00:30:18.064 [2024-07-10 11:00:34.746597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.064 [2024-07-10 11:00:34.746733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.064 [2024-07-10 11:00:34.746759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.746774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.746788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.746817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.756612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.756750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.756779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.756798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.756812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.756842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 
00:30:18.065 [2024-07-10 11:00:34.766613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.766737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.766762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.766777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.766791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.766820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.776630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.776801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.776828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.776843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.776857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.776887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.786708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.786879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.786906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.786922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.786936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.786966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 
00:30:18.065 [2024-07-10 11:00:34.796727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.796860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.796887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.796902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.796915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.796944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.806730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.806860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.806891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.806908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.806922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.806951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.816829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.816982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.817009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.817024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.817038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.817067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 
00:30:18.065 [2024-07-10 11:00:34.826808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.826985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.827012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.827027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.827041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.827069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.836849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.837021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.837048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.837063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.837077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.837105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.846864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.846998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.847026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.847042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.847055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.847086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 
00:30:18.065 [2024-07-10 11:00:34.856863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.856985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.857009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.857024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.857037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.857066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.866955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.867121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.867147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.867163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.867176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.867205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.065 [2024-07-10 11:00:34.876925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.877060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.877086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.877101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.877114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.877144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 
00:30:18.065 [2024-07-10 11:00:34.886982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.065 [2024-07-10 11:00:34.887124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.065 [2024-07-10 11:00:34.887153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.065 [2024-07-10 11:00:34.887170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.065 [2024-07-10 11:00:34.887184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.065 [2024-07-10 11:00:34.887217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.065 qpair failed and we were unable to recover it. 00:30:18.323 [2024-07-10 11:00:34.897035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.323 [2024-07-10 11:00:34.897201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.323 [2024-07-10 11:00:34.897234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.323 [2024-07-10 11:00:34.897252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.323 [2024-07-10 11:00:34.897267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.323 [2024-07-10 11:00:34.897298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.323 qpair failed and we were unable to recover it. 00:30:18.323 [2024-07-10 11:00:34.907024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.323 [2024-07-10 11:00:34.907194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.323 [2024-07-10 11:00:34.907220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.323 [2024-07-10 11:00:34.907236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.323 [2024-07-10 11:00:34.907250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.323 [2024-07-10 11:00:34.907280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.323 qpair failed and we were unable to recover it. 
00:30:18.323 [2024-07-10 11:00:34.917084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.323 [2024-07-10 11:00:34.917218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.323 [2024-07-10 11:00:34.917244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.323 [2024-07-10 11:00:34.917260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.323 [2024-07-10 11:00:34.917274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.323 [2024-07-10 11:00:34.917302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.323 qpair failed and we were unable to recover it. 00:30:18.323 [2024-07-10 11:00:34.927064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.323 [2024-07-10 11:00:34.927187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.323 [2024-07-10 11:00:34.927213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.323 [2024-07-10 11:00:34.927228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.927241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.927271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:34.937161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.937324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.937351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.937366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.937379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.937414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:34.947155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.947287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.947314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.947328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.947342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.947372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:34.957200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.957327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.957353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.957368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.957381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.957410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:34.967234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.967382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.967411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.967435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.967452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.967482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:34.977256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.977385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.977412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.977433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.977448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.977478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:34.987263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.987456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.987488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.987504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.987519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.987549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:34.997282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:34.997411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:34.997444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:34.997459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:34.997473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:34.997503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:35.007377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.007519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.007545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.007560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.007574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.007604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.017349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.017478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.017505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.017520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.017534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.017563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.027367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.027514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.027540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.027555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.027568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.027603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:35.037419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.037560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.037586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.037601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.037614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.037644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.047418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.047546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.047572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.047586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.047599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.047629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.057476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.057607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.057634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.057649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.057662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.057692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:35.067485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.067615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.067641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.067656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.067669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.067699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.077512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.077638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.077669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.077684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.077698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.077727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.087580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.087728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.087754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.087769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.087784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.087813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:35.097599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.097726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.097752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.097767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.097780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.097811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.107642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.107821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.107847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.107862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.107875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.107905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.117668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.117824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.117850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.117865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.117878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.117917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 
00:30:18.324 [2024-07-10 11:00:35.127685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.127827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.127852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.127867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.127880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.127910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.137706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.137832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.324 [2024-07-10 11:00:35.137859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.324 [2024-07-10 11:00:35.137874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.324 [2024-07-10 11:00:35.137887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.324 [2024-07-10 11:00:35.137916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.324 qpair failed and we were unable to recover it. 00:30:18.324 [2024-07-10 11:00:35.147733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.324 [2024-07-10 11:00:35.147868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.147896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.147912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.147927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.581 [2024-07-10 11:00:35.147957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.581 qpair failed and we were unable to recover it. 
00:30:18.581 [2024-07-10 11:00:35.157798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.581 [2024-07-10 11:00:35.157929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.157957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.157972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.157986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.581 [2024-07-10 11:00:35.158017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-10 11:00:35.167795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.581 [2024-07-10 11:00:35.167925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.167956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.167972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.167986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.581 [2024-07-10 11:00:35.168016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-10 11:00:35.177871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.581 [2024-07-10 11:00:35.178052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.178078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.178093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.178107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.581 [2024-07-10 11:00:35.178136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.581 qpair failed and we were unable to recover it. 
00:30:18.581 [2024-07-10 11:00:35.187858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.581 [2024-07-10 11:00:35.188008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.188034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.188049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.188064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.581 [2024-07-10 11:00:35.188093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-10 11:00:35.197887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.581 [2024-07-10 11:00:35.198068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.198095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.198110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.198123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.581 [2024-07-10 11:00:35.198153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.581 qpair failed and we were unable to recover it. 00:30:18.581 [2024-07-10 11:00:35.207937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.581 [2024-07-10 11:00:35.208066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.581 [2024-07-10 11:00:35.208093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.581 [2024-07-10 11:00:35.208108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.581 [2024-07-10 11:00:35.208121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.208156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 
00:30:18.582 [2024-07-10 11:00:35.217955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.218106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.218133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.218149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.218167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.218198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.227985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.228124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.228150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.228166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.228183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.228213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.238097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.238247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.238273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.238288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.238303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.238332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 
00:30:18.582 [2024-07-10 11:00:35.247996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.248120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.248147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.248162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.248175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.248204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.258127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.258259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.258290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.258309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.258325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.258369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.268097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.268237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.268264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.268280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.268293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.268322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 
00:30:18.582 [2024-07-10 11:00:35.278084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.278210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.278236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.278251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.278266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.278295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.288144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.288273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.288299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.288314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.288329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.288358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.298264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.298390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.298415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.298438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.298459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.298489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 
00:30:18.582 [2024-07-10 11:00:35.308178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.308308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.308335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.308350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.308363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.308393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.318226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.318384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.318409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.318431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.318446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.318475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.328322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.328450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.328474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.328488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.328501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.328529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 
00:30:18.582 [2024-07-10 11:00:35.338386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.338570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.582 [2024-07-10 11:00:35.338596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.582 [2024-07-10 11:00:35.338612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.582 [2024-07-10 11:00:35.338627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.582 [2024-07-10 11:00:35.338656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.582 qpair failed and we were unable to recover it. 00:30:18.582 [2024-07-10 11:00:35.348302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.582 [2024-07-10 11:00:35.348443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.583 [2024-07-10 11:00:35.348469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.583 [2024-07-10 11:00:35.348484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.583 [2024-07-10 11:00:35.348497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.583 [2024-07-10 11:00:35.348527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-10 11:00:35.358422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.583 [2024-07-10 11:00:35.358557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.583 [2024-07-10 11:00:35.358582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.583 [2024-07-10 11:00:35.358597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.583 [2024-07-10 11:00:35.358611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.583 [2024-07-10 11:00:35.358640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.583 qpair failed and we were unable to recover it. 
00:30:18.583 [2024-07-10 11:00:35.368366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.583 [2024-07-10 11:00:35.368526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.583 [2024-07-10 11:00:35.368552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.583 [2024-07-10 11:00:35.368568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.583 [2024-07-10 11:00:35.368581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.583 [2024-07-10 11:00:35.368612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-10 11:00:35.378450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.583 [2024-07-10 11:00:35.378601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.583 [2024-07-10 11:00:35.378626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.583 [2024-07-10 11:00:35.378641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.583 [2024-07-10 11:00:35.378654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.583 [2024-07-10 11:00:35.378684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.583 [2024-07-10 11:00:35.388456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.583 [2024-07-10 11:00:35.388612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.583 [2024-07-10 11:00:35.388638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.583 [2024-07-10 11:00:35.388653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.583 [2024-07-10 11:00:35.388673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.583 [2024-07-10 11:00:35.388703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.583 qpair failed and we were unable to recover it. 
00:30:18.583 [2024-07-10 11:00:35.398488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.583 [2024-07-10 11:00:35.398615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.583 [2024-07-10 11:00:35.398641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.583 [2024-07-10 11:00:35.398656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.583 [2024-07-10 11:00:35.398669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.583 [2024-07-10 11:00:35.398699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.583 qpair failed and we were unable to recover it. 00:30:18.841 [2024-07-10 11:00:35.408565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.841 [2024-07-10 11:00:35.408726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.841 [2024-07-10 11:00:35.408755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.841 [2024-07-10 11:00:35.408771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.841 [2024-07-10 11:00:35.408785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.408816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.418583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.418730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.418757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.418772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.418786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.418816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.428561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.428692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.428718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.428733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.428748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.428778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.438609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.438744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.438771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.438785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.438799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.438829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.448602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.448739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.448766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.448782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.448799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.448829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.458626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.458798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.458825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.458841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.458855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.458884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.468663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.468842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.468867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.468882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.468895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.468925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.478664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.478793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.478819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.478834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.478854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.478884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.488698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.488829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.488855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.488870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.488885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.488914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.498761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.498897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.498922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.498937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.498950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.498979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.508784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.508924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.508950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.508965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.508978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.509008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.518785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.518957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.518983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.518998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.519011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.519041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.528839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.528975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.529001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.529016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.529031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.529060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.538848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.538980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.539005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.539021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.539035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.539064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.548869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.549006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.549032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.549046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.549060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.549089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.558958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.559133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.559159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.559174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.559188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.559217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.568912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.569055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.569081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.569095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.569115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.569144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.578970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.579105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.579131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.579146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.579161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.579190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.589069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.589210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.589236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.589251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.589265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.589295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.599008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.599186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.599212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.599226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.599239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.599269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.609013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.609139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.609165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.609180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.609194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.609223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.619070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.619207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.619232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.619247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.619260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.619290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.629083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.629215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.629240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.629255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.629268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.629298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 
00:30:18.842 [2024-07-10 11:00:35.639111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.639243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.639269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.639284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.842 [2024-07-10 11:00:35.639298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.842 [2024-07-10 11:00:35.639328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.842 qpair failed and we were unable to recover it. 00:30:18.842 [2024-07-10 11:00:35.649153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.842 [2024-07-10 11:00:35.649281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.842 [2024-07-10 11:00:35.649306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.842 [2024-07-10 11:00:35.649321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.843 [2024-07-10 11:00:35.649334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.843 [2024-07-10 11:00:35.649365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.843 qpair failed and we were unable to recover it. 00:30:18.843 [2024-07-10 11:00:35.659206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.843 [2024-07-10 11:00:35.659347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.843 [2024-07-10 11:00:35.659372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.843 [2024-07-10 11:00:35.659393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.843 [2024-07-10 11:00:35.659408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:18.843 [2024-07-10 11:00:35.659444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:18.843 qpair failed and we were unable to recover it. 
00:30:19.101 [2024-07-10 11:00:35.669234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.669421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.669465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.669485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.669500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.669535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.679242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.679370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.679397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.679413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.679431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.679462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.689257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.689419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.689451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.689467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.689480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.689510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 
00:30:19.102 [2024-07-10 11:00:35.699297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.699432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.699459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.699474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.699488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.699518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.709362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.709527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.709554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.709569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.709583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.709613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.719416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.719566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.719592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.719607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.719621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.719651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 
00:30:19.102 [2024-07-10 11:00:35.729399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.729546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.729573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.729588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.729602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.729630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.739406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.739552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.739578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.739593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.739607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.739636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.749446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.749584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.749610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.749632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.749647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.749675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 
00:30:19.102 [2024-07-10 11:00:35.759499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.759682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.759718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.759733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.759747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.759776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.769546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.769679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.769704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.769726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.769739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.769769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 00:30:19.102 [2024-07-10 11:00:35.779540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.779721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.779747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.102 [2024-07-10 11:00:35.779762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.102 [2024-07-10 11:00:35.779776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.102 [2024-07-10 11:00:35.779805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.102 qpair failed and we were unable to recover it. 
00:30:19.102 [2024-07-10 11:00:35.789583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.102 [2024-07-10 11:00:35.789766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.102 [2024-07-10 11:00:35.789791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.789806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.789820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.789849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.799625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.799787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.799812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.799827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.799841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.799870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.809635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.809813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.809839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.809854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.809868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.809897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 
00:30:19.103 [2024-07-10 11:00:35.819647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.819801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.819827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.819841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.819855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.819884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.829697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.829835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.829860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.829874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.829888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.829917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.839767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.839953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.839979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.840000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.840016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.840046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 
00:30:19.103 [2024-07-10 11:00:35.849744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.849876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.849901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.849917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.849931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.849960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.859757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.859889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.859914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.859929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.859943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.859973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.869827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.869964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.869990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.870006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.870020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.870049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 
00:30:19.103 [2024-07-10 11:00:35.879893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.880034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.880060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.880075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.880092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.880121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.889863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.889998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.890024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.890039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.890052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.890081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.899857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.899991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.900017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.900032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.900046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.900074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 
00:30:19.103 [2024-07-10 11:00:35.909942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.103 [2024-07-10 11:00:35.910072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.103 [2024-07-10 11:00:35.910099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.103 [2024-07-10 11:00:35.910114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.103 [2024-07-10 11:00:35.910128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.103 [2024-07-10 11:00:35.910156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.103 qpair failed and we were unable to recover it. 00:30:19.103 [2024-07-10 11:00:35.919956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.104 [2024-07-10 11:00:35.920086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.104 [2024-07-10 11:00:35.920113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.104 [2024-07-10 11:00:35.920128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.104 [2024-07-10 11:00:35.920142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.104 [2024-07-10 11:00:35.920171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.104 qpair failed and we were unable to recover it. 00:30:19.362 [2024-07-10 11:00:35.929958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.930088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.930116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.930141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.930156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.930187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 
00:30:19.362 [2024-07-10 11:00:35.939974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.940098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.940125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.940140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.940153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.940183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 00:30:19.362 [2024-07-10 11:00:35.950028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.950173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.950200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.950215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.950229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.950258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 00:30:19.362 [2024-07-10 11:00:35.960079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.960205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.960232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.960247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.960261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.960291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 
00:30:19.362 [2024-07-10 11:00:35.970058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.970199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.970225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.970241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.970255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.970283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 00:30:19.362 [2024-07-10 11:00:35.980125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.980254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.980280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.980295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.980309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.980338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 00:30:19.362 [2024-07-10 11:00:35.990144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:35.990283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:35.990309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:35.990325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:35.990339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:35.990368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 
00:30:19.362 [2024-07-10 11:00:36.000184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.362 [2024-07-10 11:00:36.000315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.362 [2024-07-10 11:00:36.000342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.362 [2024-07-10 11:00:36.000357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.362 [2024-07-10 11:00:36.000370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.362 [2024-07-10 11:00:36.000399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.362 qpair failed and we were unable to recover it. 00:30:19.362 [2024-07-10 11:00:36.010223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.010345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.010372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.010387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.010400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.010437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.020212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.020368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.020399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.020417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.020440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.020471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 
00:30:19.363 [2024-07-10 11:00:36.030293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.030418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.030450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.030466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.030479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.030509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.040309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.040460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.040488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.040507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.040520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.040550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.050311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.050444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.050470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.050486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.050499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.050528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 
00:30:19.363 [2024-07-10 11:00:36.060344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.060484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.060510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.060525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.060539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.060568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.070388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.070530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.070556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.070571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.070585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.070614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.080433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.080615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.080643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.080662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.080677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.080706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 
00:30:19.363 [2024-07-10 11:00:36.090415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.090581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.090608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.090623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.090636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.090665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.100503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.100669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.100695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.100710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.100724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.100754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.110505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.110637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.110668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.110684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.110698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.110728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 
00:30:19.363 [2024-07-10 11:00:36.120546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.120698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.120725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.363 [2024-07-10 11:00:36.120740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.363 [2024-07-10 11:00:36.120754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.363 [2024-07-10 11:00:36.120784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.363 qpair failed and we were unable to recover it. 00:30:19.363 [2024-07-10 11:00:36.130541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.363 [2024-07-10 11:00:36.130668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.363 [2024-07-10 11:00:36.130694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.364 [2024-07-10 11:00:36.130709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.364 [2024-07-10 11:00:36.130722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.364 [2024-07-10 11:00:36.130751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.364 qpair failed and we were unable to recover it. 00:30:19.364 [2024-07-10 11:00:36.140596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.364 [2024-07-10 11:00:36.140724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.364 [2024-07-10 11:00:36.140750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.364 [2024-07-10 11:00:36.140765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.364 [2024-07-10 11:00:36.140778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.364 [2024-07-10 11:00:36.140807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.364 qpair failed and we were unable to recover it. 
00:30:19.364 [2024-07-10 11:00:36.150672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.364 [2024-07-10 11:00:36.150837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.364 [2024-07-10 11:00:36.150864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.364 [2024-07-10 11:00:36.150878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.364 [2024-07-10 11:00:36.150891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.364 [2024-07-10 11:00:36.150925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.364 qpair failed and we were unable to recover it. 00:30:19.364 [2024-07-10 11:00:36.160660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.364 [2024-07-10 11:00:36.160797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.364 [2024-07-10 11:00:36.160823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.364 [2024-07-10 11:00:36.160837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.364 [2024-07-10 11:00:36.160851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.364 [2024-07-10 11:00:36.160880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.364 qpair failed and we were unable to recover it. 00:30:19.364 [2024-07-10 11:00:36.170728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.364 [2024-07-10 11:00:36.170859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.364 [2024-07-10 11:00:36.170884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.364 [2024-07-10 11:00:36.170900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.364 [2024-07-10 11:00:36.170913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.364 [2024-07-10 11:00:36.170942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.364 qpair failed and we were unable to recover it. 
00:30:19.364 [2024-07-10 11:00:36.180698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.364 [2024-07-10 11:00:36.180870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.364 [2024-07-10 11:00:36.180896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.364 [2024-07-10 11:00:36.180911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.364 [2024-07-10 11:00:36.180925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.364 [2024-07-10 11:00:36.180953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.364 qpair failed and we were unable to recover it. 00:30:19.623 [2024-07-10 11:00:36.190737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.190933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.190962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.190990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.191017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.191058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 00:30:19.623 [2024-07-10 11:00:36.200763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.200905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.200938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.200956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.200969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.200999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 
00:30:19.623 [2024-07-10 11:00:36.210785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.210975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.211002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.211019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.211032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.211062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 00:30:19.623 [2024-07-10 11:00:36.220877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.221036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.221062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.221077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.221091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.221120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 00:30:19.623 [2024-07-10 11:00:36.230897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.231053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.231079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.231095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.231108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.231137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 
00:30:19.623 [2024-07-10 11:00:36.240967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.241152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.241181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.241196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.241213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.241249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 00:30:19.623 [2024-07-10 11:00:36.251023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.623 [2024-07-10 11:00:36.251159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.623 [2024-07-10 11:00:36.251187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.623 [2024-07-10 11:00:36.251207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.623 [2024-07-10 11:00:36.251222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.623 [2024-07-10 11:00:36.251252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.623 qpair failed and we were unable to recover it. 00:30:19.623 [2024-07-10 11:00:36.260985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.261158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.261185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.261201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.261214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.261243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 
00:30:19.624 [2024-07-10 11:00:36.271072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.271213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.271240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.271255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.271269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.271298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.281006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.281189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.281215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.281230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.281244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.281274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.291084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.291213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.291245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.291262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.291276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.291305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 
00:30:19.624 [2024-07-10 11:00:36.301111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.301238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.301267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.301283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.301297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.301326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.311126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.311298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.311325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.311341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.311355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.311385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.321131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.321272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.321299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.321314] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.321328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.321358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 
00:30:19.624 [2024-07-10 11:00:36.331158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.331307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.331331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.331346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.331359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.331394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.341243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.341372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.341398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.341413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.341434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.341465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.351237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.351373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.351401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.351420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.351446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.351477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 
00:30:19.624 [2024-07-10 11:00:36.361225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.361382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.624 [2024-07-10 11:00:36.361409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.624 [2024-07-10 11:00:36.361431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.624 [2024-07-10 11:00:36.361446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.624 [2024-07-10 11:00:36.361476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.624 qpair failed and we were unable to recover it. 00:30:19.624 [2024-07-10 11:00:36.371302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.624 [2024-07-10 11:00:36.371440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.371465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.371480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.371494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.371522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-10 11:00:36.381290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.381417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.381453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.381469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.381483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.381512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-10 11:00:36.391350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.391519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.391546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.391562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.391575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.391604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-10 11:00:36.401358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.401491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.401516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.401530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.401544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.401573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-10 11:00:36.411458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.411589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.411617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.411636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.411651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.411680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.625 [2024-07-10 11:00:36.421421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.421597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.421624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.421639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.421653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.421690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-10 11:00:36.431470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.431606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.431632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.431647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.431660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.431689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 00:30:19.625 [2024-07-10 11:00:36.441506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.625 [2024-07-10 11:00:36.441637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.625 [2024-07-10 11:00:36.441663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.625 [2024-07-10 11:00:36.441678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.625 [2024-07-10 11:00:36.441702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.625 [2024-07-10 11:00:36.441731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.625 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.451530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.451719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.451754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.451785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.451813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.451848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.461527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.461658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.461684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.461699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.461713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.461743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.471606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.471747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.471780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.471797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.471811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.471841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.481601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.481779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.481805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.481821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.481835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.481864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.491707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.491835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.491862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.491877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.491891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.491920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.501635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.501777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.501803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.501819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.501833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.501862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.511797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.511964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.511993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.512009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.512032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.512064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.521789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.521918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.521946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.521961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.521975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.522004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.531797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.531931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.531958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.531974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.531988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.532017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.541837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.542001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.542028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.542044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.542058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.542086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.551804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.551933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.551959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.551975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.551989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.552017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.561824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.561960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.561987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.562002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.562016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.562045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.571853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.572030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.572057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.572073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.572087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.572117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.581873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.581998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.582024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.582039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.582053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.582082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.591919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.592060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.592087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.592103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.592117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.592145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.601937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.602063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.602090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.602105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.602124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.602153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.612010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.612140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.612167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.612182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.612196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.612224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.621997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.622120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.622145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.622160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.622174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.622203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.632041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.632172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.632197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.632212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.632226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.632255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.642095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.642223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.642250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.642265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.642279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.642309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.652083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.652218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.652244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.652259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.652273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.652302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 
00:30:19.884 [2024-07-10 11:00:36.662119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.662266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.662295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.884 [2024-07-10 11:00:36.662311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.884 [2024-07-10 11:00:36.662325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.884 [2024-07-10 11:00:36.662355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.884 qpair failed and we were unable to recover it. 00:30:19.884 [2024-07-10 11:00:36.672169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.884 [2024-07-10 11:00:36.672333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.884 [2024-07-10 11:00:36.672361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.885 [2024-07-10 11:00:36.672377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.885 [2024-07-10 11:00:36.672391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.885 [2024-07-10 11:00:36.672420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.885 qpair failed and we were unable to recover it. 00:30:19.885 [2024-07-10 11:00:36.682213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.885 [2024-07-10 11:00:36.682341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.885 [2024-07-10 11:00:36.682368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.885 [2024-07-10 11:00:36.682384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.885 [2024-07-10 11:00:36.682397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.885 [2024-07-10 11:00:36.682433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.885 qpair failed and we were unable to recover it. 
00:30:19.885 [2024-07-10 11:00:36.692228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.885 [2024-07-10 11:00:36.692352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.885 [2024-07-10 11:00:36.692377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.885 [2024-07-10 11:00:36.692391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.885 [2024-07-10 11:00:36.692412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.885 [2024-07-10 11:00:36.692450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.885 qpair failed and we were unable to recover it. 00:30:19.885 [2024-07-10 11:00:36.702238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.885 [2024-07-10 11:00:36.702385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.885 [2024-07-10 11:00:36.702412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.885 [2024-07-10 11:00:36.702432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.885 [2024-07-10 11:00:36.702447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:19.885 [2024-07-10 11:00:36.702477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.885 qpair failed and we were unable to recover it. 00:30:20.143 [2024-07-10 11:00:36.712263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.712400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.143 [2024-07-10 11:00:36.712438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.143 [2024-07-10 11:00:36.712459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.143 [2024-07-10 11:00:36.712473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.143 [2024-07-10 11:00:36.712504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.143 qpair failed and we were unable to recover it. 
00:30:20.143 [2024-07-10 11:00:36.722318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.722455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.143 [2024-07-10 11:00:36.722481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.143 [2024-07-10 11:00:36.722496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.143 [2024-07-10 11:00:36.722509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.143 [2024-07-10 11:00:36.722540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.143 qpair failed and we were unable to recover it. 00:30:20.143 [2024-07-10 11:00:36.732384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.732519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.143 [2024-07-10 11:00:36.732546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.143 [2024-07-10 11:00:36.732562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.143 [2024-07-10 11:00:36.732576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.143 [2024-07-10 11:00:36.732605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.143 qpair failed and we were unable to recover it. 00:30:20.143 [2024-07-10 11:00:36.742390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.742543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.143 [2024-07-10 11:00:36.742571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.143 [2024-07-10 11:00:36.742586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.143 [2024-07-10 11:00:36.742602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.143 [2024-07-10 11:00:36.742632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.143 qpair failed and we were unable to recover it. 
00:30:20.143 [2024-07-10 11:00:36.752405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.752551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.143 [2024-07-10 11:00:36.752578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.143 [2024-07-10 11:00:36.752594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.143 [2024-07-10 11:00:36.752607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.143 [2024-07-10 11:00:36.752637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.143 qpair failed and we were unable to recover it. 00:30:20.143 [2024-07-10 11:00:36.762397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.762545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.143 [2024-07-10 11:00:36.762572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.143 [2024-07-10 11:00:36.762588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.143 [2024-07-10 11:00:36.762602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.143 [2024-07-10 11:00:36.762631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.143 qpair failed and we were unable to recover it. 00:30:20.143 [2024-07-10 11:00:36.772446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.143 [2024-07-10 11:00:36.772581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.772610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.772629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.772643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.772674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 
00:30:20.144 [2024-07-10 11:00:36.782454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.782624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.782652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.782667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.782686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.782716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.792526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.792692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.792719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.792735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.792748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.792777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.802535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.802665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.802691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.802706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.802720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.802748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 
00:30:20.144 [2024-07-10 11:00:36.812540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.812678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.812704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.812720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.812734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.812763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.822575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.822698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.822724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.822740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.822754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.822783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.832617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.832762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.832788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.832804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.832818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.832847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 
00:30:20.144 [2024-07-10 11:00:36.842663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.842797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.842822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.842837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.842851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.842879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.852680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.852839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.852865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.852880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.852893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.852924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.862734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.862900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.862926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.862941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.862955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.862984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 
00:30:20.144 [2024-07-10 11:00:36.872752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.872885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.872911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.872933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.872948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.872978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.882742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.882867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.882893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.882908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.144 [2024-07-10 11:00:36.882923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.144 [2024-07-10 11:00:36.882952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.144 qpair failed and we were unable to recover it. 00:30:20.144 [2024-07-10 11:00:36.892790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.144 [2024-07-10 11:00:36.892920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.144 [2024-07-10 11:00:36.892946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.144 [2024-07-10 11:00:36.892961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.892975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.893005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 
00:30:20.145 [2024-07-10 11:00:36.902808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.902936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.902963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.902981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.902996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.903027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 00:30:20.145 [2024-07-10 11:00:36.912844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.912982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.913008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.913023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.913037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.913066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 00:30:20.145 [2024-07-10 11:00:36.922852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.922981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.923007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.923021] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.923036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.923065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 
00:30:20.145 [2024-07-10 11:00:36.932889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.933045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.933070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.933085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.933098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.933128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 00:30:20.145 [2024-07-10 11:00:36.942928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.943065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.943091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.943106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.943119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.943147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 00:30:20.145 [2024-07-10 11:00:36.953038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.953187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.953213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.953228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.953242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.953271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 
00:30:20.145 [2024-07-10 11:00:36.963079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.145 [2024-07-10 11:00:36.963245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.145 [2024-07-10 11:00:36.963273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.145 [2024-07-10 11:00:36.963295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.145 [2024-07-10 11:00:36.963310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.145 [2024-07-10 11:00:36.963341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.145 qpair failed and we were unable to recover it. 00:30:20.404 [2024-07-10 11:00:36.973020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.404 [2024-07-10 11:00:36.973149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.404 [2024-07-10 11:00:36.973176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.404 [2024-07-10 11:00:36.973192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.404 [2024-07-10 11:00:36.973207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.404 [2024-07-10 11:00:36.973237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.404 qpair failed and we were unable to recover it. 00:30:20.404 [2024-07-10 11:00:36.983081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.404 [2024-07-10 11:00:36.983227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.404 [2024-07-10 11:00:36.983253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.404 [2024-07-10 11:00:36.983268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.404 [2024-07-10 11:00:36.983283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.404 [2024-07-10 11:00:36.983312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.404 qpair failed and we were unable to recover it. 
00:30:20.404 [2024-07-10 11:00:36.993099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.404 [2024-07-10 11:00:36.993270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.404 [2024-07-10 11:00:36.993295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.404 [2024-07-10 11:00:36.993310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.404 [2024-07-10 11:00:36.993324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.404 [2024-07-10 11:00:36.993353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.404 qpair failed and we were unable to recover it. 00:30:20.404 [2024-07-10 11:00:37.003144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.404 [2024-07-10 11:00:37.003293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.404 [2024-07-10 11:00:37.003318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.404 [2024-07-10 11:00:37.003333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.003347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.003375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 00:30:20.405 [2024-07-10 11:00:37.013154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.013296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.013322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.013336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.013350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.013379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 
00:30:20.405 [2024-07-10 11:00:37.023177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.023318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.023344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.023359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.023373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.023402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 00:30:20.405 [2024-07-10 11:00:37.033207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.033341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.033368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.033382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.033396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.033432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 00:30:20.405 [2024-07-10 11:00:37.043259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.043438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.043465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.043480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.043494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.043522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 
00:30:20.405 [2024-07-10 11:00:37.053238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.053364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.053390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.053422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.053446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.053477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 00:30:20.405 [2024-07-10 11:00:37.063293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.063488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.063515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.063530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.063544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.063574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 00:30:20.405 [2024-07-10 11:00:37.073456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.073614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.073640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.073655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.073669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.073698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 
00:30:20.405 [2024-07-10 11:00:37.083356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.083500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.083526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.405 [2024-07-10 11:00:37.083541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.405 [2024-07-10 11:00:37.083555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.405 [2024-07-10 11:00:37.083583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.405 qpair failed and we were unable to recover it. 00:30:20.405 [2024-07-10 11:00:37.093390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.405 [2024-07-10 11:00:37.093535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.405 [2024-07-10 11:00:37.093561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.093575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.093589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.093618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 00:30:20.406 [2024-07-10 11:00:37.103470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.103601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.103627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.103642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.103656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.103685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 
00:30:20.406 [2024-07-10 11:00:37.113456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.113589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.113615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.113633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.113648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.113677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 00:30:20.406 [2024-07-10 11:00:37.123463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.123608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.123634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.123649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.123663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.123692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 00:30:20.406 [2024-07-10 11:00:37.133486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.133616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.133642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.133657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.133671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.133699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 
00:30:20.406 [2024-07-10 11:00:37.143562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.143716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.143748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.143770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.143784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.143814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 00:30:20.406 [2024-07-10 11:00:37.153625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.153781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.153814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.153829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.153844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.153873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 00:30:20.406 [2024-07-10 11:00:37.163583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.163711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.163736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.163750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.163764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.163793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 
00:30:20.406 [2024-07-10 11:00:37.173625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.173804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.406 [2024-07-10 11:00:37.173831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.406 [2024-07-10 11:00:37.173846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.406 [2024-07-10 11:00:37.173859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.406 [2024-07-10 11:00:37.173888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.406 qpair failed and we were unable to recover it. 00:30:20.406 [2024-07-10 11:00:37.183634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.406 [2024-07-10 11:00:37.183773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.407 [2024-07-10 11:00:37.183799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.407 [2024-07-10 11:00:37.183814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.407 [2024-07-10 11:00:37.183827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.407 [2024-07-10 11:00:37.183856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.407 qpair failed and we were unable to recover it. 00:30:20.407 [2024-07-10 11:00:37.193673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.407 [2024-07-10 11:00:37.193815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.407 [2024-07-10 11:00:37.193840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.407 [2024-07-10 11:00:37.193855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.407 [2024-07-10 11:00:37.193869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.407 [2024-07-10 11:00:37.193897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.407 qpair failed and we were unable to recover it. 
00:30:20.407 [2024-07-10 11:00:37.203768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.407 [2024-07-10 11:00:37.203927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.407 [2024-07-10 11:00:37.203953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.407 [2024-07-10 11:00:37.203967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.407 [2024-07-10 11:00:37.203982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.407 [2024-07-10 11:00:37.204010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.407 qpair failed and we were unable to recover it. 00:30:20.407 [2024-07-10 11:00:37.213753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.407 [2024-07-10 11:00:37.213875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.407 [2024-07-10 11:00:37.213901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.407 [2024-07-10 11:00:37.213916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.407 [2024-07-10 11:00:37.213930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.407 [2024-07-10 11:00:37.213959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.407 qpair failed and we were unable to recover it. 00:30:20.407 [2024-07-10 11:00:37.223847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.407 [2024-07-10 11:00:37.223990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.407 [2024-07-10 11:00:37.224031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.407 [2024-07-10 11:00:37.224050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.407 [2024-07-10 11:00:37.224065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.407 [2024-07-10 11:00:37.224107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.407 qpair failed and we were unable to recover it. 
00:30:20.664 [2024-07-10 11:00:37.233825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.664 [2024-07-10 11:00:37.234022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.664 [2024-07-10 11:00:37.234075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.664 [2024-07-10 11:00:37.234107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.664 [2024-07-10 11:00:37.234141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.664 [2024-07-10 11:00:37.234189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.664 qpair failed and we were unable to recover it. 00:30:20.664 [2024-07-10 11:00:37.243862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.664 [2024-07-10 11:00:37.243992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.244020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.244036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.244050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.244079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.253925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.254058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.254085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.254099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.254114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.254143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.263868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.264005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.264031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.264046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.264060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.264089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.273916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.274055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.274081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.274095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.274108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.274138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.283915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.284052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.284078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.284094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.284107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.284137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.293945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.294074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.294100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.294115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.294129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.294158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.303975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.304109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.304134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.304149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.304163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.304192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.314009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.314145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.314170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.314185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.314199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.314227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.324027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.324160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.324190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.324206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.324220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.324248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.334080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.334216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.334240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.334254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.334267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.334295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.344083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.344213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.344239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.344255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.344270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.344298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.354112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.354241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.354267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.354282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.354296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.354325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.364180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.364316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.364342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.364357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.364371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.364405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.374165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.374305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.374332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.374347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.374361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.374390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.384203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.384338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.384364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.384378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.384393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.384422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.394249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.394392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.394417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.394442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.394456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.394486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.404290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.404441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.404467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.404482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.404496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.404525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.414265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.414396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.414434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.414453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.414467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.414498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.424308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.424445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.424471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.424486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.424501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.424529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.434371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.434544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.434569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.434584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.434598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.434627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 
00:30:20.665 [2024-07-10 11:00:37.444370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.444509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.444535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.444550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.665 [2024-07-10 11:00:37.444564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.665 [2024-07-10 11:00:37.444593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.665 qpair failed and we were unable to recover it. 00:30:20.665 [2024-07-10 11:00:37.454378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.665 [2024-07-10 11:00:37.454520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.665 [2024-07-10 11:00:37.454546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.665 [2024-07-10 11:00:37.454561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.666 [2024-07-10 11:00:37.454575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.666 [2024-07-10 11:00:37.454609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.666 qpair failed and we were unable to recover it. 00:30:20.666 [2024-07-10 11:00:37.464445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.666 [2024-07-10 11:00:37.464611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.666 [2024-07-10 11:00:37.464637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.666 [2024-07-10 11:00:37.464651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.666 [2024-07-10 11:00:37.464665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.666 [2024-07-10 11:00:37.464694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.666 qpair failed and we were unable to recover it. 
00:30:20.666 [2024-07-10 11:00:37.474496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.666 [2024-07-10 11:00:37.474654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.666 [2024-07-10 11:00:37.474680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.666 [2024-07-10 11:00:37.474695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.666 [2024-07-10 11:00:37.474709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.666 [2024-07-10 11:00:37.474738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.666 qpair failed and we were unable to recover it. 00:30:20.666 [2024-07-10 11:00:37.484548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.666 [2024-07-10 11:00:37.484679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.666 [2024-07-10 11:00:37.484705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.666 [2024-07-10 11:00:37.484720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.666 [2024-07-10 11:00:37.484735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.666 [2024-07-10 11:00:37.484764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.666 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.494518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.494654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.494682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.494698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.494712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.494742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 
00:30:20.924 [2024-07-10 11:00:37.504537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.504665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.504697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.504713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.504727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.504756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.514618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.514793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.514819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.514834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.514848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.514877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.524643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.524781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.524806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.524821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.524835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.524864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 
00:30:20.924 [2024-07-10 11:00:37.534614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.534747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.534773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.534787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.534802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.534831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.544682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.544809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.544834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.544849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.544863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.544899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.554719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.554858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.554883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.554898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.554912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.554941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 
00:30:20.924 [2024-07-10 11:00:37.564776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.564908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.564933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.564949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.564963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.564991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.574758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.574930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.574955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.574970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.924 [2024-07-10 11:00:37.575000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.924 [2024-07-10 11:00:37.575028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.924 qpair failed and we were unable to recover it. 00:30:20.924 [2024-07-10 11:00:37.584777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.924 [2024-07-10 11:00:37.584908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.924 [2024-07-10 11:00:37.584933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.924 [2024-07-10 11:00:37.584948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.584963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.584991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 
00:30:20.925 [2024-07-10 11:00:37.594809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.594983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.595013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.595029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.595043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.595073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.604912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.605045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.605071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.605086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.605100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.605129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.614897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.615041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.615066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.615097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.615111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.615139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 
00:30:20.925 [2024-07-10 11:00:37.624889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.625067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.625093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.625108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.625122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.625151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.634954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.635095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.635120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.635136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.635155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.635185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.644965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.645134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.645159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.645174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.645188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.645216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 
00:30:20.925 [2024-07-10 11:00:37.654991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.655125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.655150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.655166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.655180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.655224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.665090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.665236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.665261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.665276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.665290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.665318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.675071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.675216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.675241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.675255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.675270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.675299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 
00:30:20.925 [2024-07-10 11:00:37.685058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.685194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.685224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.685240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.685254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.685283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.695095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.925 [2024-07-10 11:00:37.695231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.925 [2024-07-10 11:00:37.695257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.925 [2024-07-10 11:00:37.695272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.925 [2024-07-10 11:00:37.695286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.925 [2024-07-10 11:00:37.695315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.925 qpair failed and we were unable to recover it. 00:30:20.925 [2024-07-10 11:00:37.705149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.926 [2024-07-10 11:00:37.705285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.926 [2024-07-10 11:00:37.705310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.926 [2024-07-10 11:00:37.705325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.926 [2024-07-10 11:00:37.705339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.926 [2024-07-10 11:00:37.705368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.926 qpair failed and we were unable to recover it. 
00:30:20.926 [2024-07-10 11:00:37.715153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.926 [2024-07-10 11:00:37.715289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.926 [2024-07-10 11:00:37.715314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.926 [2024-07-10 11:00:37.715330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.926 [2024-07-10 11:00:37.715344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.926 [2024-07-10 11:00:37.715373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.926 qpair failed and we were unable to recover it. 00:30:20.926 [2024-07-10 11:00:37.725166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.926 [2024-07-10 11:00:37.725313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.926 [2024-07-10 11:00:37.725339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.926 [2024-07-10 11:00:37.725354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.926 [2024-07-10 11:00:37.725373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.926 [2024-07-10 11:00:37.725402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.926 qpair failed and we were unable to recover it. 00:30:20.926 [2024-07-10 11:00:37.735176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.926 [2024-07-10 11:00:37.735317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.926 [2024-07-10 11:00:37.735343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.926 [2024-07-10 11:00:37.735357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.926 [2024-07-10 11:00:37.735371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.926 [2024-07-10 11:00:37.735400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.926 qpair failed and we were unable to recover it. 
00:30:20.926 [2024-07-10 11:00:37.745209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.926 [2024-07-10 11:00:37.745344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.926 [2024-07-10 11:00:37.745371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.926 [2024-07-10 11:00:37.745386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.926 [2024-07-10 11:00:37.745400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:20.926 [2024-07-10 11:00:37.745447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:20.926 qpair failed and we were unable to recover it. 00:30:21.184 [2024-07-10 11:00:37.755274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.184 [2024-07-10 11:00:37.755419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.184 [2024-07-10 11:00:37.755457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.184 [2024-07-10 11:00:37.755475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.184 [2024-07-10 11:00:37.755490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.184 [2024-07-10 11:00:37.755522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.184 qpair failed and we were unable to recover it. 00:30:21.184 [2024-07-10 11:00:37.765300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.765437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.765464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.765479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.765493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.765522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 
00:30:21.185 [2024-07-10 11:00:37.775342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.775498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.775525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.775540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.775554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.775584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 00:30:21.185 [2024-07-10 11:00:37.785352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.785498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.785523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.785538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.785552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.785580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 00:30:21.185 [2024-07-10 11:00:37.795443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.795653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.795681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.795696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.795709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.795740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 
00:30:21.185 [2024-07-10 11:00:37.805412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.805556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.805583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.805598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.805611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.805640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 00:30:21.185 [2024-07-10 11:00:37.815423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.815554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.815579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.815594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.815614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.815643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 00:30:21.185 [2024-07-10 11:00:37.825499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.825624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.825653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.825671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.825684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.825715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 
00:30:21.185 [2024-07-10 11:00:37.835625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.835757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.835783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.835798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.835813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.835842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 00:30:21.185 [2024-07-10 11:00:37.845594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.845739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.845765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.845780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.845794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.845823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 00:30:21.185 [2024-07-10 11:00:37.855553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.855683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.855709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.185 [2024-07-10 11:00:37.855725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.185 [2024-07-10 11:00:37.855738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.185 [2024-07-10 11:00:37.855767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.185 qpair failed and we were unable to recover it. 
00:30:21.185 [2024-07-10 11:00:37.865609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.185 [2024-07-10 11:00:37.865741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.185 [2024-07-10 11:00:37.865777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.865793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.865806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.865835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.875637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.875776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.875802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.875817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.875831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.875860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.885631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.885769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.885796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.885811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.885825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.885854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 
00:30:21.186 [2024-07-10 11:00:37.895658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.895786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.895812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.895827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.895840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.895869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.905706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.905838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.905864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.905880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.905899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.905945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.915727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.915857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.915882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.915898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.915912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.915941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 
00:30:21.186 [2024-07-10 11:00:37.925733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.925866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.925894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.925910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.925923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.925952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.935800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.935925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.935952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.935968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.935982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.936010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.945874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.946009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.946035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.946050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.946064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.946109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 
00:30:21.186 [2024-07-10 11:00:37.955828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.955968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.955994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.956009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.956023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.956052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.965908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.186 [2024-07-10 11:00:37.966058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.186 [2024-07-10 11:00:37.966085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.186 [2024-07-10 11:00:37.966100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.186 [2024-07-10 11:00:37.966114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.186 [2024-07-10 11:00:37.966142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.186 qpair failed and we were unable to recover it. 00:30:21.186 [2024-07-10 11:00:37.975885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.187 [2024-07-10 11:00:37.976043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.187 [2024-07-10 11:00:37.976069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.187 [2024-07-10 11:00:37.976085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.187 [2024-07-10 11:00:37.976099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.187 [2024-07-10 11:00:37.976127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.187 qpair failed and we were unable to recover it. 
00:30:21.187 [2024-07-10 11:00:37.985974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.187 [2024-07-10 11:00:37.986147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.187 [2024-07-10 11:00:37.986173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.187 [2024-07-10 11:00:37.986188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.187 [2024-07-10 11:00:37.986202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.187 [2024-07-10 11:00:37.986231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.187 qpair failed and we were unable to recover it. 00:30:21.187 [2024-07-10 11:00:37.995968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.187 [2024-07-10 11:00:37.996096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.187 [2024-07-10 11:00:37.996120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.187 [2024-07-10 11:00:37.996141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.187 [2024-07-10 11:00:37.996156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.187 [2024-07-10 11:00:37.996185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.187 qpair failed and we were unable to recover it. 00:30:21.187 [2024-07-10 11:00:38.006016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.187 [2024-07-10 11:00:38.006151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.187 [2024-07-10 11:00:38.006183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.187 [2024-07-10 11:00:38.006200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.187 [2024-07-10 11:00:38.006214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.187 [2024-07-10 11:00:38.006245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.187 qpair failed and we were unable to recover it. 
00:30:21.446 [2024-07-10 11:00:38.016023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.016158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.016189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.016206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.016221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.016266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-07-10 11:00:38.026025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.026149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.026176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.026191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.026205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.026235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-07-10 11:00:38.036068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.036196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.036223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.036238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.036253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.036282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 
00:30:21.446 [2024-07-10 11:00:38.046108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.046274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.046301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.046316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.046330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.046359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-07-10 11:00:38.056124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.056259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.056286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.056302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.056317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.056345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-07-10 11:00:38.066135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.066261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.066287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.066302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.066315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.066344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 
00:30:21.446 [2024-07-10 11:00:38.076171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.076317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.076344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.076359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.076372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.076401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-07-10 11:00:38.086230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.086360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.086386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.086407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.086430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.086461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 00:30:21.446 [2024-07-10 11:00:38.096225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.096360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.446 [2024-07-10 11:00:38.096386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.446 [2024-07-10 11:00:38.096402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.446 [2024-07-10 11:00:38.096417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.446 [2024-07-10 11:00:38.096464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.446 qpair failed and we were unable to recover it. 
00:30:21.446 [2024-07-10 11:00:38.106255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.446 [2024-07-10 11:00:38.106383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.106409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.106436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.106454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.106483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.116302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.116438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.116471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.116487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.116501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.116530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.126304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.126436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.126476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.126492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.126505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.126536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 
00:30:21.447 [2024-07-10 11:00:38.136331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.136465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.136492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.136507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.136521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.136549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.146373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.146502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.146528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.146547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.146560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.146589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.156418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.156565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.156591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.156607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.156620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.156648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 
00:30:21.447 [2024-07-10 11:00:38.166462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.166644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.166670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.166685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.166698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.166727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.176500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.176637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.176663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.176684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.176698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.176727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.186506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.186637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.186663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.186678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.186691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.186720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 
00:30:21.447 [2024-07-10 11:00:38.196574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.196709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.196735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.196751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.196764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.196792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.206550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.206704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.206729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.206744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.206756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.206785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 00:30:21.447 [2024-07-10 11:00:38.216600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.216725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.216750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.216765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.447 [2024-07-10 11:00:38.216778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.447 [2024-07-10 11:00:38.216808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.447 qpair failed and we were unable to recover it. 
00:30:21.447 [2024-07-10 11:00:38.226683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.447 [2024-07-10 11:00:38.226815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.447 [2024-07-10 11:00:38.226840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.447 [2024-07-10 11:00:38.226856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.448 [2024-07-10 11:00:38.226869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.448 [2024-07-10 11:00:38.226898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.448 qpair failed and we were unable to recover it. 00:30:21.448 [2024-07-10 11:00:38.236677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.448 [2024-07-10 11:00:38.236810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.448 [2024-07-10 11:00:38.236836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.448 [2024-07-10 11:00:38.236851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.448 [2024-07-10 11:00:38.236866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.448 [2024-07-10 11:00:38.236894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.448 qpair failed and we were unable to recover it. 00:30:21.448 [2024-07-10 11:00:38.246731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.448 [2024-07-10 11:00:38.246863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.448 [2024-07-10 11:00:38.246889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.448 [2024-07-10 11:00:38.246904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.448 [2024-07-10 11:00:38.246918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.448 [2024-07-10 11:00:38.246946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.448 qpair failed and we were unable to recover it. 
00:30:21.448 [2024-07-10 11:00:38.256689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.448 [2024-07-10 11:00:38.256837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.448 [2024-07-10 11:00:38.256863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.448 [2024-07-10 11:00:38.256890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.448 [2024-07-10 11:00:38.256904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.448 [2024-07-10 11:00:38.256934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.448 qpair failed and we were unable to recover it. 00:30:21.448 [2024-07-10 11:00:38.266744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.448 [2024-07-10 11:00:38.266869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.448 [2024-07-10 11:00:38.266896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.448 [2024-07-10 11:00:38.266917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.448 [2024-07-10 11:00:38.266933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.448 [2024-07-10 11:00:38.266963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.448 qpair failed and we were unable to recover it. 00:30:21.707 [2024-07-10 11:00:38.276759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.707 [2024-07-10 11:00:38.276889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.707 [2024-07-10 11:00:38.276916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.707 [2024-07-10 11:00:38.276932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.707 [2024-07-10 11:00:38.276945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.707 [2024-07-10 11:00:38.276975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.707 qpair failed and we were unable to recover it. 
00:30:21.707 [2024-07-10 11:00:38.286783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.707 [2024-07-10 11:00:38.286947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.707 [2024-07-10 11:00:38.286974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.707 [2024-07-10 11:00:38.286989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.707 [2024-07-10 11:00:38.287002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.707 [2024-07-10 11:00:38.287031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.707 qpair failed and we were unable to recover it. 00:30:21.707 [2024-07-10 11:00:38.296798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.707 [2024-07-10 11:00:38.296943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.707 [2024-07-10 11:00:38.296969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.707 [2024-07-10 11:00:38.296984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.707 [2024-07-10 11:00:38.296997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.707 [2024-07-10 11:00:38.297041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.707 qpair failed and we were unable to recover it. 00:30:21.707 [2024-07-10 11:00:38.306937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.307102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.307128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.307142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.307155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.307183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 
00:30:21.708 [2024-07-10 11:00:38.316913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.317105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.317131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.317147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.317160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.317188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 00:30:21.708 [2024-07-10 11:00:38.326908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.327041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.327066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.327081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.327095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.327124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 00:30:21.708 [2024-07-10 11:00:38.336994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.337141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.337166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.337181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.337194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.337222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 
00:30:21.708 [2024-07-10 11:00:38.347032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.347158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.347184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.347199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.347212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.347243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 00:30:21.708 [2024-07-10 11:00:38.356978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.357107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.357132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.357153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.357167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.357198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 00:30:21.708 [2024-07-10 11:00:38.367018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.367146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.367171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.367186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.367200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.367228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 
00:30:21.708 [2024-07-10 11:00:38.377058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.377193] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.377221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.377239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.377253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.377296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 00:30:21.708 [2024-07-10 11:00:38.387056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.387185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.387212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.387227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.387240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.387285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 00:30:21.708 [2024-07-10 11:00:38.397096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.708 [2024-07-10 11:00:38.397226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.708 [2024-07-10 11:00:38.397252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.708 [2024-07-10 11:00:38.397268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.708 [2024-07-10 11:00:38.397281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.708 [2024-07-10 11:00:38.397310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.708 qpair failed and we were unable to recover it. 
00:30:21.708 [2024-07-10 11:00:38.407114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.407271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.407297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.407312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.407326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.407355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.417148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.417277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.417304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.417320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.417334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.417363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.427201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.427348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.427375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.427390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.427404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.427438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 
00:30:21.709 [2024-07-10 11:00:38.437238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.437374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.437400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.437416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.437437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.437467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.447266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.447394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.447432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.447450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.447464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.447493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.457242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.457383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.457411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.457434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.457450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.457479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 
00:30:21.709 [2024-07-10 11:00:38.467318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.467454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.467479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.467494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.467508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.467537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.477337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.477518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.477545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.477560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.477574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.477603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.487349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.487481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.487507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.487522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.487535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.487565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 
00:30:21.709 [2024-07-10 11:00:38.497409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.709 [2024-07-10 11:00:38.497594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.709 [2024-07-10 11:00:38.497622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.709 [2024-07-10 11:00:38.497637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.709 [2024-07-10 11:00:38.497651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.709 [2024-07-10 11:00:38.497680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.709 qpair failed and we were unable to recover it. 00:30:21.709 [2024-07-10 11:00:38.507413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-07-10 11:00:38.507544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-07-10 11:00:38.507570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-07-10 11:00:38.507585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-07-10 11:00:38.507599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.710 [2024-07-10 11:00:38.507628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.710 [2024-07-10 11:00:38.517463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-07-10 11:00:38.517630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-07-10 11:00:38.517657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-07-10 11:00:38.517673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-07-10 11:00:38.517690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.710 [2024-07-10 11:00:38.517721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.710 qpair failed and we were unable to recover it. 
00:30:21.710 [2024-07-10 11:00:38.527485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.710 [2024-07-10 11:00:38.527615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.710 [2024-07-10 11:00:38.527643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.710 [2024-07-10 11:00:38.527660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.710 [2024-07-10 11:00:38.527673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.710 [2024-07-10 11:00:38.527703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.710 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.537493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.537631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.537664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.537682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.537696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.537727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.547540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.547667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.547695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.547710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.547724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.547752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 
00:30:21.969 [2024-07-10 11:00:38.557591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.557729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.557755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.557771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.557784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.557813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.567591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.567719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.567753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.567768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.567782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.567811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.577612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.577734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.577758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.577773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.577787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.577821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 
00:30:21.969 [2024-07-10 11:00:38.587662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.587788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.587814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.587829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.587843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.587872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.597674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.597817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.597843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.597859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.597873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.597902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.607737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.607885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.607912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.607927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.607941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.607969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 
00:30:21.969 [2024-07-10 11:00:38.617725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.617850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.617875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.617890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.617903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.617931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.627751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.627876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.627908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.627924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.627937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.627966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 00:30:21.969 [2024-07-10 11:00:38.637785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.969 [2024-07-10 11:00:38.637931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.969 [2024-07-10 11:00:38.637960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.969 [2024-07-10 11:00:38.637977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.969 [2024-07-10 11:00:38.637990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.969 [2024-07-10 11:00:38.638020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.969 qpair failed and we were unable to recover it. 
00:30:21.969 [2024-07-10 11:00:38.647808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.647941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.647967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.647981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.647995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.648023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.657832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.657989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.658015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.658031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.658060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.658089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.667859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.667986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.668011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.668027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.668040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.668074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 
00:30:21.970 [2024-07-10 11:00:38.677901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.678032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.678058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.678073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.678087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.678116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.687942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.688117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.688143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.688158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.688172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.688202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.697981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.698107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.698133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.698148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.698162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.698190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 
00:30:21.970 [2024-07-10 11:00:38.708011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.708179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.708206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.708222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.708251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.708279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.718058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.718190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.718221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.718238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.718251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.718281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.728082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.728224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.728252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.728271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.728286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.728315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 
00:30:21.970 [2024-07-10 11:00:38.738090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.738220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.738247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.738263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.738277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.738306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.748148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.748276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.748303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.748318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.748332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.748375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 00:30:21.970 [2024-07-10 11:00:38.758229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.758359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.758385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.758400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.970 [2024-07-10 11:00:38.758414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.970 [2024-07-10 11:00:38.758456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.970 qpair failed and we were unable to recover it. 
00:30:21.970 [2024-07-10 11:00:38.768170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.970 [2024-07-10 11:00:38.768350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.970 [2024-07-10 11:00:38.768377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.970 [2024-07-10 11:00:38.768392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.971 [2024-07-10 11:00:38.768406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.971 [2024-07-10 11:00:38.768441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.971 qpair failed and we were unable to recover it. 00:30:21.971 [2024-07-10 11:00:38.778178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.971 [2024-07-10 11:00:38.778310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.971 [2024-07-10 11:00:38.778337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.971 [2024-07-10 11:00:38.778352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.971 [2024-07-10 11:00:38.778365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.971 [2024-07-10 11:00:38.778394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.971 qpair failed and we were unable to recover it. 00:30:21.971 [2024-07-10 11:00:38.788249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.971 [2024-07-10 11:00:38.788397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.971 [2024-07-10 11:00:38.788422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.971 [2024-07-10 11:00:38.788445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.971 [2024-07-10 11:00:38.788459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:21.971 [2024-07-10 11:00:38.788487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:21.971 qpair failed and we were unable to recover it. 
00:30:22.229 [2024-07-10 11:00:38.798256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.229 [2024-07-10 11:00:38.798392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.229 [2024-07-10 11:00:38.798421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.229 [2024-07-10 11:00:38.798471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.229 [2024-07-10 11:00:38.798498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.229 [2024-07-10 11:00:38.798532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.229 qpair failed and we were unable to recover it. 00:30:22.229 [2024-07-10 11:00:38.808289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.229 [2024-07-10 11:00:38.808420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.229 [2024-07-10 11:00:38.808458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.229 [2024-07-10 11:00:38.808474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.229 [2024-07-10 11:00:38.808488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.808518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.818298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.818430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.818458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.818473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.818487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.818517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 
00:30:22.230 [2024-07-10 11:00:38.828336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.828503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.828530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.828545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.828559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.828588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.838380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.838518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.838545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.838560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.838572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.838601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.848516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.848642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.848668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.848683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.848702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.848732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 
00:30:22.230 [2024-07-10 11:00:38.858429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.858560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.858585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.858600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.858613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.858642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.868582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.868770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.868796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.868810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.868823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.868852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.878494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.878628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.878653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.878668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.878682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.878711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 
00:30:22.230 [2024-07-10 11:00:38.888506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.888640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.888665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.888680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.888695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.888723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.898581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.898756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.898792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.898826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.230 [2024-07-10 11:00:38.898840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.230 [2024-07-10 11:00:38.898884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.230 qpair failed and we were unable to recover it. 00:30:22.230 [2024-07-10 11:00:38.908580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.230 [2024-07-10 11:00:38.908706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.230 [2024-07-10 11:00:38.908733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.230 [2024-07-10 11:00:38.908747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.908760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.908790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-07-10 11:00:38.918642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.918770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.918796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.918811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.918825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.918854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-07-10 11:00:38.928631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.928757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.928782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.928797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.928812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.928840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-07-10 11:00:38.938696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.938827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.938852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.938867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.938886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.938929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-07-10 11:00:38.948714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.948848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.948874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.948888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.948902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.948931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-07-10 11:00:38.958734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.958907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.958932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.958947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.958960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.958990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-07-10 11:00:38.968761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.968887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.968913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.968928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.968942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.968971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-07-10 11:00:38.978790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.978916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.978942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.978957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.978972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.979001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-07-10 11:00:38.988818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.988953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.988979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.988994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.989009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.989038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 00:30:22.231 [2024-07-10 11:00:38.998896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:38.999032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:38.999059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.231 [2024-07-10 11:00:38.999078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.231 [2024-07-10 11:00:38.999093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.231 [2024-07-10 11:00:38.999122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.231 qpair failed and we were unable to recover it. 
00:30:22.231 [2024-07-10 11:00:39.008941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.231 [2024-07-10 11:00:39.009071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.231 [2024-07-10 11:00:39.009098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.232 [2024-07-10 11:00:39.009113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.232 [2024-07-10 11:00:39.009126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.232 [2024-07-10 11:00:39.009156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.232 qpair failed and we were unable to recover it. 00:30:22.232 [2024-07-10 11:00:39.018903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.232 [2024-07-10 11:00:39.019064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.232 [2024-07-10 11:00:39.019089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.232 [2024-07-10 11:00:39.019104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.232 [2024-07-10 11:00:39.019134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.232 [2024-07-10 11:00:39.019162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.232 qpair failed and we were unable to recover it. 00:30:22.232 [2024-07-10 11:00:39.028964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.232 [2024-07-10 11:00:39.029096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.232 [2024-07-10 11:00:39.029122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.232 [2024-07-10 11:00:39.029137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.232 [2024-07-10 11:00:39.029156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.232 [2024-07-10 11:00:39.029186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.232 qpair failed and we were unable to recover it. 
00:30:22.232 [2024-07-10 11:00:39.039017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.232 [2024-07-10 11:00:39.039152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.232 [2024-07-10 11:00:39.039177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.232 [2024-07-10 11:00:39.039193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.232 [2024-07-10 11:00:39.039206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.232 [2024-07-10 11:00:39.039235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.232 qpair failed and we were unable to recover it. 00:30:22.232 [2024-07-10 11:00:39.048979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.232 [2024-07-10 11:00:39.049114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.232 [2024-07-10 11:00:39.049140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.232 [2024-07-10 11:00:39.049154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.232 [2024-07-10 11:00:39.049169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.232 [2024-07-10 11:00:39.049198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.232 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.059043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.059180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.059208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.059224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.059239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.059269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 
00:30:22.490 [2024-07-10 11:00:39.069043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.069203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.069230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.069246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.069262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.069291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.079090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.079227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.079254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.079270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.079284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.079313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.089119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.089249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.089275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.089291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.089304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.089334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 
00:30:22.490 [2024-07-10 11:00:39.099146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.099284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.099311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.099330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.099345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.099390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.109172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.109301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.109327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.109343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.109357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.109387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.119295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.119441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.119467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.119482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.119502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.119532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 
00:30:22.490 [2024-07-10 11:00:39.129224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.129351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.129376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.129391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.129404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.129441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.139246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.139366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.139392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.139407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.139422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.139465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.149352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.149488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.149514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.149529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.149543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.149574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 
00:30:22.490 [2024-07-10 11:00:39.159343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.159542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.159568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.159583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.159596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.159626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.169351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.169537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.169563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.169579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.169592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.169622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.179373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.179523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.179549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.179564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.179578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.179607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 
00:30:22.490 [2024-07-10 11:00:39.189400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.189538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.189564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.189578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.189592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.189621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.199473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.199614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.199639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.199653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.199667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.199696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.209485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.209617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.209643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.209663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.209679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.209708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 
00:30:22.490 [2024-07-10 11:00:39.219484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.490 [2024-07-10 11:00:39.219649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.490 [2024-07-10 11:00:39.219675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.490 [2024-07-10 11:00:39.219690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.490 [2024-07-10 11:00:39.219714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.490 [2024-07-10 11:00:39.219743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.490 qpair failed and we were unable to recover it. 00:30:22.490 [2024-07-10 11:00:39.229505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.229644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.229670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.229685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.229699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.229733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 00:30:22.491 [2024-07-10 11:00:39.239538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.239677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.239703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.239720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.239733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.239762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 
00:30:22.491 [2024-07-10 11:00:39.249574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.249708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.249743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.249758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.249772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.249800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 00:30:22.491 [2024-07-10 11:00:39.259613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.259750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.259775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.259790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.259804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.259832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 00:30:22.491 [2024-07-10 11:00:39.269653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.269785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.269811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.269826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.269840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.269869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 
00:30:22.491 [2024-07-10 11:00:39.279673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.279812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.279838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.279853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.279865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.279894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 00:30:22.491 [2024-07-10 11:00:39.289695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.289833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.289858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.289873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.289887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.289915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 00:30:22.491 [2024-07-10 11:00:39.299749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.299879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.299904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.299926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.299941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.299986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 
00:30:22.491 [2024-07-10 11:00:39.309769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.491 [2024-07-10 11:00:39.309962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.491 [2024-07-10 11:00:39.310005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.491 [2024-07-10 11:00:39.310020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.491 [2024-07-10 11:00:39.310034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.491 [2024-07-10 11:00:39.310078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.491 qpair failed and we were unable to recover it. 00:30:22.749 [2024-07-10 11:00:39.319785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.319921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.319948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.319964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.319978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.320007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 00:30:22.749 [2024-07-10 11:00:39.329810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.329947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.329974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.329989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.330003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.330032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 
00:30:22.749 [2024-07-10 11:00:39.339881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.340045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.340070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.340084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.340113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.340140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 00:30:22.749 [2024-07-10 11:00:39.349861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.349995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.350021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.350036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.350050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.350078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 00:30:22.749 [2024-07-10 11:00:39.359942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.360114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.360140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.360155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.360169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.360198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 
00:30:22.749 [2024-07-10 11:00:39.369980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.370131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.370156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.370171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.370184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.370213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 00:30:22.749 [2024-07-10 11:00:39.379945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.380078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.380104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.380119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.380133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.380161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.749 qpair failed and we were unable to recover it. 00:30:22.749 [2024-07-10 11:00:39.390023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.749 [2024-07-10 11:00:39.390179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.749 [2024-07-10 11:00:39.390204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.749 [2024-07-10 11:00:39.390226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.749 [2024-07-10 11:00:39.390241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.749 [2024-07-10 11:00:39.390270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.750 qpair failed and we were unable to recover it. 
00:30:22.750 [2024-07-10 11:00:39.400045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.400182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.400208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.400227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.400242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.750 [2024-07-10 11:00:39.400271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.410079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.410262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.410288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.410303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.410317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.750 [2024-07-10 11:00:39.410346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.420078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.420208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.420234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.420249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.420263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.750 [2024-07-10 11:00:39.420292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.750 qpair failed and we were unable to recover it. 
00:30:22.750 [2024-07-10 11:00:39.430118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.430247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.430273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.430288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.430302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.750 [2024-07-10 11:00:39.430331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.440149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.440330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.440356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.440371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.440385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23ae350 00:30:22.750 [2024-07-10 11:00:39.440414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.450184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.450319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.450355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.450374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.450388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2ea8000b90 00:30:22.750 [2024-07-10 11:00:39.450435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.750 qpair failed and we were unable to recover it. 
00:30:22.750 [2024-07-10 11:00:39.460244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.460380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.460421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.460454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.460469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2ea8000b90 00:30:22.750 [2024-07-10 11:00:39.460500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.470263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.470389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.470441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.470462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.470477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e98000b90 00:30:22.750 [2024-07-10 11:00:39.470510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.480318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.750 [2024-07-10 11:00:39.480498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.750 [2024-07-10 11:00:39.480525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.750 [2024-07-10 11:00:39.480546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.750 [2024-07-10 11:00:39.480560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2e98000b90 00:30:22.750 [2024-07-10 11:00:39.480593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.750 qpair failed and we were unable to recover it. 00:30:22.750 [2024-07-10 11:00:39.480732] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:22.750 A controller has encountered a failure and is being reset. 00:30:22.750 [2024-07-10 11:00:39.480800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bbdc0 (9): Bad file descriptor 00:30:22.750 Controller properly reset. 
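The cascade above is the connect-failure loop driven by the target disconnect test: each retried I/O qpair CONNECT is rejected by the target with "Unknown controller ID 0x1" (completion status sct 1, sc 130) until a Keep Alive submission also fails and the controller is reset. When triaging a run like this, counting the recurring failure marker in the captured console log is quicker than reading the whole cascade; a minimal sketch, assuming the console output has been saved to a file named build.log (hypothetical name):

    grep -c 'qpair failed and we were unable to recover it' build.log
    grep -n 'Controller properly reset' build.log

The first command reports how many connect attempts were abandoned; the second locates the point in the log where recovery completed.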
00:30:22.750 Initializing NVMe Controllers 00:30:22.750 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:22.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:22.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:22.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:22.750 Initialization complete. Launching workers. 00:30:22.750 Starting thread on core 1 00:30:22.750 Starting thread on core 2 00:30:22.750 Starting thread on core 3 00:30:22.750 Starting thread on core 0 00:30:22.750 11:00:39 -- host/target_disconnect.sh@59 -- # sync 00:30:22.750 00:30:22.750 real 0m11.366s 00:30:22.750 user 0m21.172s 00:30:22.750 sys 0m5.390s 00:30:22.750 11:00:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:22.750 11:00:39 -- common/autotest_common.sh@10 -- # set +x 00:30:22.750 ************************************ 00:30:22.750 END TEST nvmf_target_disconnect_tc2 00:30:22.750 ************************************ 00:30:22.750 11:00:39 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:22.750 11:00:39 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:22.750 11:00:39 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:22.750 11:00:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:22.750 11:00:39 -- nvmf/common.sh@116 -- # sync 00:30:22.750 11:00:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:22.750 11:00:39 -- nvmf/common.sh@119 -- # set +e 00:30:22.750 11:00:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:22.750 11:00:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:22.750 rmmod nvme_tcp 00:30:22.750 rmmod nvme_fabrics 00:30:22.750 rmmod nvme_keyring 00:30:23.007 11:00:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:23.007 11:00:39 -- nvmf/common.sh@123 -- # set -e 00:30:23.007 11:00:39 -- nvmf/common.sh@124 -- # return 0 00:30:23.007 11:00:39 -- nvmf/common.sh@477 -- # '[' -n 3587821 ']' 00:30:23.007 11:00:39 -- nvmf/common.sh@478 -- # killprocess 3587821 00:30:23.007 11:00:39 -- common/autotest_common.sh@926 -- # '[' -z 3587821 ']' 00:30:23.007 11:00:39 -- common/autotest_common.sh@930 -- # kill -0 3587821 00:30:23.007 11:00:39 -- common/autotest_common.sh@931 -- # uname 00:30:23.007 11:00:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:23.007 11:00:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3587821 00:30:23.007 11:00:39 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:23.007 11:00:39 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:23.007 11:00:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3587821' 00:30:23.007 killing process with pid 3587821 00:30:23.007 11:00:39 -- common/autotest_common.sh@945 -- # kill 3587821 00:30:23.007 11:00:39 -- common/autotest_common.sh@950 -- # wait 3587821 00:30:23.265 11:00:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:23.265 11:00:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:23.265 11:00:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:23.265 11:00:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.265 11:00:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:23.265 11:00:39 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.265 11:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.265 11:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.166 11:00:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:25.166 00:30:25.166 real 0m16.040s 00:30:25.166 user 0m46.815s 00:30:25.166 sys 0m7.338s 00:30:25.166 11:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.166 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:30:25.166 ************************************ 00:30:25.166 END TEST nvmf_target_disconnect 00:30:25.166 ************************************ 00:30:25.166 11:00:41 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:25.166 11:00:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:25.166 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:30:25.166 11:00:41 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:25.166 00:30:25.166 real 22m25.412s 00:30:25.166 user 64m29.390s 00:30:25.166 sys 5m36.546s 00:30:25.166 11:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.166 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:30:25.166 ************************************ 00:30:25.166 END TEST nvmf_tcp 00:30:25.166 ************************************ 00:30:25.166 11:00:41 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:25.166 11:00:41 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:25.166 11:00:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:25.166 11:00:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:25.166 11:00:41 -- common/autotest_common.sh@10 -- # set +x 00:30:25.166 ************************************ 00:30:25.166 START TEST spdkcli_nvmf_tcp 00:30:25.166 ************************************ 00:30:25.166 11:00:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:25.425 * Looking for test storage... 
00:30:25.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:25.425 11:00:42 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:25.425 11:00:42 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.425 11:00:42 -- nvmf/common.sh@7 -- # uname -s 00:30:25.425 11:00:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.425 11:00:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.425 11:00:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.425 11:00:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.425 11:00:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.425 11:00:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.425 11:00:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.425 11:00:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.425 11:00:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.425 11:00:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.425 11:00:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.425 11:00:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:25.425 11:00:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.425 11:00:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.425 11:00:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.425 11:00:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.425 11:00:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.425 11:00:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.425 11:00:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.425 11:00:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.425 11:00:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.425 11:00:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.425 11:00:42 -- paths/export.sh@5 -- # export PATH 00:30:25.425 11:00:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.425 11:00:42 -- nvmf/common.sh@46 -- # : 0 00:30:25.425 11:00:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:25.425 11:00:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:25.425 11:00:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:25.425 11:00:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.425 11:00:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.425 11:00:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:25.425 11:00:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:25.425 11:00:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:25.425 11:00:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:25.425 11:00:42 -- common/autotest_common.sh@10 -- # set +x 00:30:25.425 11:00:42 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:25.425 11:00:42 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3589038 00:30:25.425 11:00:42 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:25.425 11:00:42 -- spdkcli/common.sh@34 -- # waitforlisten 3589038 00:30:25.425 11:00:42 -- common/autotest_common.sh@819 -- # '[' -z 3589038 ']' 00:30:25.425 11:00:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.425 11:00:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:25.425 11:00:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.425 11:00:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:25.425 11:00:42 -- common/autotest_common.sh@10 -- # set +x 00:30:25.425 [2024-07-10 11:00:42.071614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:30:25.425 [2024-07-10 11:00:42.071693] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589038 ] 00:30:25.425 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.425 [2024-07-10 11:00:42.128074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:25.425 [2024-07-10 11:00:42.212004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:25.425 [2024-07-10 11:00:42.212240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.425 [2024-07-10 11:00:42.212245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.357 11:00:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:26.357 11:00:43 -- common/autotest_common.sh@852 -- # return 0 00:30:26.357 11:00:43 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:26.357 11:00:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:26.357 11:00:43 -- common/autotest_common.sh@10 -- # set +x 00:30:26.357 11:00:43 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:26.357 11:00:43 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:26.357 11:00:43 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:26.357 11:00:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:26.357 11:00:43 -- common/autotest_common.sh@10 -- # set +x 00:30:26.357 11:00:43 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:26.357 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:26.357 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:26.357 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:26.357 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:26.357 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:26.357 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:26.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:26.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:26.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:26.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:26.357 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:26.357 ' 00:30:26.920 [2024-07-10 11:00:43.464207] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:28.819 [2024-07-10 11:00:45.624596] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.191 [2024-07-10 11:00:46.865088] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:32.773 [2024-07-10 11:00:49.152520] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:34.676 [2024-07-10 11:00:51.131014] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:36.045 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:36.045 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:36.045 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:36.045 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:36.045 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:36.045 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:36.045 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:36.045 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:36.045 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:36.045 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:36.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:36.045 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:36.045 11:00:52 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:36.045 11:00:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:36.045 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:36.045 11:00:52 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:36.045 11:00:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:36.045 11:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:36.045 11:00:52 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:36.045 11:00:52 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:36.609 11:00:53 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:36.609 11:00:53 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:36.609 11:00:53 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:36.609 11:00:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:36.609 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:30:36.609 11:00:53 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:30:36.609 11:00:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:36.609 11:00:53 -- common/autotest_common.sh@10 -- # set +x 00:30:36.609 11:00:53 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:36.609 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:36.609 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:36.609 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:36.609 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:36.609 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:36.609 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:36.609 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:36.609 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:36.609 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:36.609 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:36.609 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:36.609 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:36.609 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:36.609 ' 00:30:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:41.872 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:41.872 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:41.872 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:41.872 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:41.872 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:41.872 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:41.872 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:41.872 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:41.872 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:41.872 11:00:58 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:41.872 11:00:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.872 11:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.872 11:00:58 -- spdkcli/nvmf.sh@90 -- # killprocess 3589038 00:30:41.872 11:00:58 -- common/autotest_common.sh@926 -- # '[' -z 3589038 ']' 00:30:41.872 11:00:58 -- 
common/autotest_common.sh@930 -- # kill -0 3589038 00:30:41.872 11:00:58 -- common/autotest_common.sh@931 -- # uname 00:30:41.872 11:00:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:41.872 11:00:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3589038 00:30:41.872 11:00:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:41.872 11:00:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:41.872 11:00:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3589038' 00:30:41.872 killing process with pid 3589038 00:30:41.872 11:00:58 -- common/autotest_common.sh@945 -- # kill 3589038 00:30:41.872 [2024-07-10 11:00:58.563383] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:41.872 11:00:58 -- common/autotest_common.sh@950 -- # wait 3589038 00:30:42.131 11:00:58 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:42.131 11:00:58 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:42.131 11:00:58 -- spdkcli/common.sh@13 -- # '[' -n 3589038 ']' 00:30:42.131 11:00:58 -- spdkcli/common.sh@14 -- # killprocess 3589038 00:30:42.131 11:00:58 -- common/autotest_common.sh@926 -- # '[' -z 3589038 ']' 00:30:42.131 11:00:58 -- common/autotest_common.sh@930 -- # kill -0 3589038 00:30:42.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3589038) - No such process 00:30:42.131 11:00:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3589038 is not found' 00:30:42.131 Process with pid 3589038 is not found 00:30:42.131 11:00:58 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:42.131 11:00:58 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:42.131 11:00:58 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:42.131 00:30:42.131 real 0m16.823s 00:30:42.131 user 0m35.786s 00:30:42.131 sys 0m0.855s 00:30:42.131 11:00:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.131 11:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:42.131 ************************************ 00:30:42.131 END TEST spdkcli_nvmf_tcp 00:30:42.131 ************************************ 00:30:42.131 11:00:58 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:42.131 11:00:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:42.131 11:00:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:42.131 11:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:42.131 ************************************ 00:30:42.131 START TEST nvmf_identify_passthru 00:30:42.131 ************************************ 00:30:42.131 11:00:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:42.131 * Looking for test storage... 
00:30:42.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.131 11:00:58 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.131 11:00:58 -- nvmf/common.sh@7 -- # uname -s 00:30:42.131 11:00:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.131 11:00:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.131 11:00:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.131 11:00:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.131 11:00:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.131 11:00:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.131 11:00:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.131 11:00:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.131 11:00:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.131 11:00:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.131 11:00:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.132 11:00:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.132 11:00:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.132 11:00:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.132 11:00:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.132 11:00:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.132 11:00:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.132 11:00:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.132 11:00:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.132 11:00:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- paths/export.sh@5 -- # export PATH 00:30:42.132 11:00:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- nvmf/common.sh@46 -- # : 0 00:30:42.132 11:00:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:42.132 11:00:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:42.132 11:00:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:42.132 11:00:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.132 11:00:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.132 11:00:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:42.132 11:00:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:42.132 11:00:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:42.132 11:00:58 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.132 11:00:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.132 11:00:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.132 11:00:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.132 11:00:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- paths/export.sh@5 -- # export PATH 00:30:42.132 11:00:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.132 11:00:58 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:30:42.132 11:00:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:42.132 11:00:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.132 11:00:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:42.132 11:00:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:42.132 11:00:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:42.132 11:00:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.132 11:00:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:42.132 11:00:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.132 11:00:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:42.132 11:00:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:42.132 11:00:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:42.132 11:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:44.043 11:01:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:44.043 11:01:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:44.043 11:01:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:44.043 11:01:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:44.043 11:01:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:44.043 11:01:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:44.043 11:01:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:44.043 11:01:00 -- nvmf/common.sh@294 -- # net_devs=() 00:30:44.043 11:01:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:44.043 11:01:00 -- nvmf/common.sh@295 -- # e810=() 00:30:44.043 11:01:00 -- nvmf/common.sh@295 -- # local -ga e810 00:30:44.043 11:01:00 -- nvmf/common.sh@296 -- # x722=() 00:30:44.043 11:01:00 -- nvmf/common.sh@296 -- # local -ga x722 00:30:44.043 11:01:00 -- nvmf/common.sh@297 -- # mlx=() 00:30:44.043 11:01:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:44.043 11:01:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.043 11:01:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:44.043 11:01:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:44.043 11:01:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:44.043 11:01:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:44.043 11:01:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:44.043 Found 0000:0a:00.0 (0x8086 - 
0x159b) 00:30:44.043 11:01:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:44.043 11:01:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:44.043 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:44.043 11:01:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:44.043 11:01:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:44.043 11:01:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.043 11:01:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:44.043 11:01:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.043 11:01:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:44.043 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:44.043 11:01:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.043 11:01:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:44.043 11:01:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.043 11:01:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:44.043 11:01:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.043 11:01:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:44.043 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:44.043 11:01:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.043 11:01:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:44.043 11:01:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:44.043 11:01:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:44.043 11:01:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:44.043 11:01:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.043 11:01:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.043 11:01:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.043 11:01:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:44.043 11:01:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.043 11:01:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.043 11:01:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:44.043 11:01:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.043 11:01:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.043 11:01:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:44.043 11:01:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:44.043 11:01:00 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:44.043 11:01:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.043 11:01:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.043 11:01:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.043 11:01:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:44.043 11:01:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.302 11:01:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.302 11:01:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.302 11:01:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:44.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:30:44.302 00:30:44.302 --- 10.0.0.2 ping statistics --- 00:30:44.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.302 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:44.302 11:01:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:30:44.302 00:30:44.302 --- 10.0.0.1 ping statistics --- 00:30:44.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.302 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:30:44.302 11:01:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.302 11:01:00 -- nvmf/common.sh@410 -- # return 0 00:30:44.302 11:01:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:44.302 11:01:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.302 11:01:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:44.302 11:01:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:44.302 11:01:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.302 11:01:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:44.302 11:01:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:44.302 11:01:00 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:44.302 11:01:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:44.302 11:01:00 -- common/autotest_common.sh@10 -- # set +x 00:30:44.302 11:01:00 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:44.302 11:01:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:44.302 11:01:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:44.302 11:01:00 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:44.302 11:01:00 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:44.302 11:01:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:44.302 11:01:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:44.302 11:01:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:44.302 11:01:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:44.302 11:01:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:44.302 11:01:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:44.302 11:01:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:44.302 11:01:00 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:44.302 11:01:00 -- 
target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:44.302 11:01:00 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:44.302 11:01:00 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:44.302 11:01:00 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:44.302 11:01:00 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:44.302 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.486 11:01:05 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:48.486 11:01:05 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:48.487 11:01:05 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:48.487 11:01:05 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:48.487 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.667 11:01:09 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:52.667 11:01:09 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:52.667 11:01:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:52.667 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:52.667 11:01:09 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:52.667 11:01:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:52.667 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:52.667 11:01:09 -- target/identify_passthru.sh@31 -- # nvmfpid=3593761 00:30:52.667 11:01:09 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:52.667 11:01:09 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:52.667 11:01:09 -- target/identify_passthru.sh@35 -- # waitforlisten 3593761 00:30:52.667 11:01:09 -- common/autotest_common.sh@819 -- # '[' -z 3593761 ']' 00:30:52.667 11:01:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.667 11:01:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:52.667 11:01:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.667 11:01:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:52.667 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:52.667 [2024-07-10 11:01:09.427375] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:30:52.667 [2024-07-10 11:01:09.427482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.667 [2024-07-10 11:01:09.491987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:52.925 [2024-07-10 11:01:09.575727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:52.925 [2024-07-10 11:01:09.575867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:52.925 [2024-07-10 11:01:09.575883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.925 [2024-07-10 11:01:09.575895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.925 [2024-07-10 11:01:09.575950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.925 [2024-07-10 11:01:09.576006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.925 [2024-07-10 11:01:09.576071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.925 [2024-07-10 11:01:09.576073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.925 11:01:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:52.925 11:01:09 -- common/autotest_common.sh@852 -- # return 0 00:30:52.925 11:01:09 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:52.925 11:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.925 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:52.925 INFO: Log level set to 20 00:30:52.925 INFO: Requests: 00:30:52.925 { 00:30:52.925 "jsonrpc": "2.0", 00:30:52.925 "method": "nvmf_set_config", 00:30:52.925 "id": 1, 00:30:52.925 "params": { 00:30:52.925 "admin_cmd_passthru": { 00:30:52.925 "identify_ctrlr": true 00:30:52.925 } 00:30:52.925 } 00:30:52.925 } 00:30:52.925 00:30:52.925 INFO: response: 00:30:52.925 { 00:30:52.926 "jsonrpc": "2.0", 00:30:52.926 "id": 1, 00:30:52.926 "result": true 00:30:52.926 } 00:30:52.926 00:30:52.926 11:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.926 11:01:09 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:52.926 11:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.926 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:52.926 INFO: Setting log level to 20 00:30:52.926 INFO: Setting log level to 20 00:30:52.926 INFO: Log level set to 20 00:30:52.926 INFO: Log level set to 20 00:30:52.926 INFO: Requests: 00:30:52.926 { 00:30:52.926 "jsonrpc": "2.0", 00:30:52.926 "method": "framework_start_init", 00:30:52.926 "id": 1 00:30:52.926 } 00:30:52.926 00:30:52.926 INFO: Requests: 00:30:52.926 { 00:30:52.926 "jsonrpc": "2.0", 00:30:52.926 "method": "framework_start_init", 00:30:52.926 "id": 1 00:30:52.926 } 00:30:52.926 00:30:52.926 [2024-07-10 11:01:09.734614] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:52.926 INFO: response: 00:30:52.926 { 00:30:52.926 "jsonrpc": "2.0", 00:30:52.926 "id": 1, 00:30:52.926 "result": true 00:30:52.926 } 00:30:52.926 00:30:52.926 INFO: response: 00:30:52.926 { 00:30:52.926 "jsonrpc": "2.0", 00:30:52.926 "id": 1, 00:30:52.926 "result": true 00:30:52.926 } 00:30:52.926 00:30:52.926 11:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.926 11:01:09 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.926 11:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.926 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:52.926 INFO: Setting log level to 40 00:30:52.926 INFO: Setting log level to 40 00:30:52.926 INFO: Setting log level to 40 00:30:52.926 [2024-07-10 11:01:09.744552] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.183 11:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.183 11:01:09 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:30:53.183 11:01:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:53.183 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:53.183 11:01:09 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:53.183 11:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.183 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:30:56.458 Nvme0n1 00:30:56.458 11:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.458 11:01:12 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:56.458 11:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.458 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:30:56.458 11:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.458 11:01:12 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:56.458 11:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.458 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:30:56.458 11:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.458 11:01:12 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.458 11:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.458 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:30:56.458 [2024-07-10 11:01:12.632761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.458 11:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.458 11:01:12 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:56.458 11:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.458 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:30:56.458 [2024-07-10 11:01:12.640493] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:56.458 [ 00:30:56.458 { 00:30:56.458 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:56.458 "subtype": "Discovery", 00:30:56.458 "listen_addresses": [], 00:30:56.458 "allow_any_host": true, 00:30:56.458 "hosts": [] 00:30:56.458 }, 00:30:56.458 { 00:30:56.458 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:56.458 "subtype": "NVMe", 00:30:56.458 "listen_addresses": [ 00:30:56.458 { 00:30:56.458 "transport": "TCP", 00:30:56.458 "trtype": "TCP", 00:30:56.458 "adrfam": "IPv4", 00:30:56.458 "traddr": "10.0.0.2", 00:30:56.458 "trsvcid": "4420" 00:30:56.458 } 00:30:56.458 ], 00:30:56.458 "allow_any_host": true, 00:30:56.458 "hosts": [], 00:30:56.458 "serial_number": "SPDK00000000000001", 00:30:56.458 "model_number": "SPDK bdev Controller", 00:30:56.458 "max_namespaces": 1, 00:30:56.458 "min_cntlid": 1, 00:30:56.458 "max_cntlid": 65519, 00:30:56.458 "namespaces": [ 00:30:56.458 { 00:30:56.458 "nsid": 1, 00:30:56.458 "bdev_name": "Nvme0n1", 00:30:56.458 "name": "Nvme0n1", 00:30:56.458 "nguid": "B3F9A917FBCF4D20949C5BC663777C7A", 00:30:56.458 "uuid": "b3f9a917-fbcf-4d20-949c-5bc663777c7a" 00:30:56.458 } 00:30:56.458 ] 00:30:56.458 } 00:30:56.458 ] 00:30:56.458 11:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.458 11:01:12 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:56.458 11:01:12 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:56.458 11:01:12 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:56.458 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.458 11:01:12 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:56.458 11:01:12 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:56.458 11:01:12 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:56.458 11:01:12 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:56.458 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.458 11:01:12 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:56.458 11:01:12 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:56.458 11:01:12 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:56.458 11:01:12 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:56.458 11:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.459 11:01:12 -- common/autotest_common.sh@10 -- # set +x 00:30:56.459 11:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.459 11:01:12 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:56.459 11:01:12 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:56.459 11:01:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:56.459 11:01:12 -- nvmf/common.sh@116 -- # sync 00:30:56.459 11:01:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:56.459 11:01:12 -- nvmf/common.sh@119 -- # set +e 00:30:56.459 11:01:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:56.459 11:01:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:56.459 rmmod nvme_tcp 00:30:56.459 rmmod nvme_fabrics 00:30:56.459 rmmod nvme_keyring 00:30:56.459 11:01:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:56.459 11:01:12 -- nvmf/common.sh@123 -- # set -e 00:30:56.459 11:01:12 -- nvmf/common.sh@124 -- # return 0 00:30:56.459 11:01:12 -- nvmf/common.sh@477 -- # '[' -n 3593761 ']' 00:30:56.459 11:01:12 -- nvmf/common.sh@478 -- # killprocess 3593761 00:30:56.459 11:01:12 -- common/autotest_common.sh@926 -- # '[' -z 3593761 ']' 00:30:56.459 11:01:12 -- common/autotest_common.sh@930 -- # kill -0 3593761 00:30:56.459 11:01:12 -- common/autotest_common.sh@931 -- # uname 00:30:56.459 11:01:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:56.459 11:01:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3593761 00:30:56.459 11:01:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:56.459 11:01:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:56.459 11:01:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3593761' 00:30:56.459 killing process with pid 3593761 00:30:56.459 11:01:12 -- common/autotest_common.sh@945 -- # kill 3593761 00:30:56.459 [2024-07-10 11:01:12.960177] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:56.459 11:01:12 -- common/autotest_common.sh@950 -- # wait 3593761 00:30:57.830 11:01:14 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:57.830 11:01:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:57.830 11:01:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:57.830 11:01:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.830 11:01:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:57.830 11:01:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.830 11:01:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:57.830 11:01:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.731 11:01:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:59.731 00:30:59.731 real 0m17.737s 00:30:59.731 user 0m26.075s 00:30:59.731 sys 0m2.208s 00:30:59.731 11:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.731 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:30:59.731 ************************************ 00:30:59.731 END TEST nvmf_identify_passthru 00:30:59.731 ************************************ 00:30:59.990 11:01:16 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:59.990 11:01:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:59.990 11:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:59.990 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:30:59.990 ************************************ 00:30:59.990 START TEST nvmf_dif 00:30:59.990 ************************************ 00:30:59.990 11:01:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:59.990 * Looking for test storage... 00:30:59.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.990 11:01:16 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.990 11:01:16 -- nvmf/common.sh@7 -- # uname -s 00:30:59.990 11:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.990 11:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.990 11:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.990 11:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.990 11:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.990 11:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.990 11:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.990 11:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.990 11:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.990 11:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.990 11:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.990 11:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.990 11:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.990 11:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.990 11:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.990 11:01:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.990 11:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.990 11:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.990 11:01:16 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.990 11:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.990 11:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.990 11:01:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.990 11:01:16 -- paths/export.sh@5 -- # export PATH 00:30:59.990 11:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.990 11:01:16 -- nvmf/common.sh@46 -- # : 0 00:30:59.990 11:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:59.990 11:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:59.990 11:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:59.990 11:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.990 11:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.990 11:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:59.990 11:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:59.990 11:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:59.990 11:01:16 -- target/dif.sh@15 -- # NULL_META=16 00:30:59.990 11:01:16 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:59.990 11:01:16 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:59.990 11:01:16 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:59.990 11:01:16 -- target/dif.sh@135 -- # nvmftestinit 00:30:59.990 11:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:59.990 11:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.990 11:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:59.990 11:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:59.990 11:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:59.990 11:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.990 11:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:59.990 11:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.990 11:01:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:59.990 11:01:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
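Note on the dif setup traced above: the NULL_META/NULL_BLOCK_SIZE/NULL_SIZE/NULL_DIF defaults exported by dif.sh are the values each sub-test later passes to the RPC that backs its namespace. A minimal sketch of the equivalent standalone call, assuming rpc_cmd wraps SPDK's scripts/rpc.py (the wrapper's exact path is not shown in this excerpt):

  # sketch only: 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (matches the NULL_* values above)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1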
00:30:59.990 11:01:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:59.990 11:01:16 -- common/autotest_common.sh@10 -- # set +x 00:31:01.890 11:01:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:01.890 11:01:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:01.890 11:01:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:01.890 11:01:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:01.890 11:01:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:01.890 11:01:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:01.890 11:01:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:01.890 11:01:18 -- nvmf/common.sh@294 -- # net_devs=() 00:31:01.890 11:01:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:01.890 11:01:18 -- nvmf/common.sh@295 -- # e810=() 00:31:01.890 11:01:18 -- nvmf/common.sh@295 -- # local -ga e810 00:31:01.890 11:01:18 -- nvmf/common.sh@296 -- # x722=() 00:31:01.890 11:01:18 -- nvmf/common.sh@296 -- # local -ga x722 00:31:01.890 11:01:18 -- nvmf/common.sh@297 -- # mlx=() 00:31:01.890 11:01:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:01.890 11:01:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.890 11:01:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:01.890 11:01:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:01.890 11:01:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:01.890 11:01:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:01.890 11:01:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:01.890 11:01:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:01.890 11:01:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.890 11:01:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.890 11:01:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.890 11:01:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.890 11:01:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.890 11:01:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.891 11:01:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.891 11:01:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:31:01.891 11:01:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:01.891 11:01:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.891 11:01:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.891 11:01:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.891 11:01:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.891 11:01:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.891 11:01:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.891 11:01:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.891 11:01:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.891 11:01:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.891 11:01:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.891 11:01:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.891 11:01:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.891 11:01:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:01.891 11:01:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:01.891 11:01:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:01.891 11:01:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:01.891 11:01:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.891 11:01:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.891 11:01:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.891 11:01:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:01.891 11:01:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.891 11:01:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.891 11:01:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:01.891 11:01:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.891 11:01:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.891 11:01:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:01.891 11:01:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:01.891 11:01:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.891 11:01:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.891 11:01:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.891 11:01:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.891 11:01:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:01.891 11:01:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.891 11:01:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.891 11:01:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.891 11:01:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:01.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:31:01.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:31:01.891 00:31:01.891 --- 10.0.0.2 ping statistics --- 00:31:01.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.891 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:31:01.891 11:01:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:31:01.891 00:31:01.891 --- 10.0.0.1 ping statistics --- 00:31:01.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.891 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:31:01.891 11:01:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.891 11:01:18 -- nvmf/common.sh@410 -- # return 0 00:31:01.891 11:01:18 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:01.891 11:01:18 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:03.266 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:03.267 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:03.267 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:03.267 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:03.267 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:03.267 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:03.267 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:03.267 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:03.267 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:03.267 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:03.267 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:03.267 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:03.267 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:03.267 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:03.267 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:03.267 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:03.267 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:03.267 11:01:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.267 11:01:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:03.267 11:01:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:03.267 11:01:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.267 11:01:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:03.267 11:01:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:03.267 11:01:19 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:03.267 11:01:19 -- target/dif.sh@137 -- # nvmfappstart 00:31:03.267 11:01:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:03.267 11:01:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:03.267 11:01:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.267 11:01:19 -- nvmf/common.sh@469 -- # nvmfpid=3596956 00:31:03.267 11:01:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:03.267 11:01:19 -- nvmf/common.sh@470 -- # waitforlisten 3596956 00:31:03.267 11:01:19 -- common/autotest_common.sh@819 -- # '[' -z 3596956 ']' 00:31:03.267 11:01:19 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:03.267 11:01:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:03.267 11:01:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.267 11:01:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:03.267 11:01:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.267 [2024-07-10 11:01:19.995555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:03.267 [2024-07-10 11:01:19.995649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.267 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.267 [2024-07-10 11:01:20.073671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.525 [2024-07-10 11:01:20.166903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:03.525 [2024-07-10 11:01:20.167064] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.525 [2024-07-10 11:01:20.167085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.525 [2024-07-10 11:01:20.167107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.525 [2024-07-10 11:01:20.167140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.459 11:01:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:04.459 11:01:20 -- common/autotest_common.sh@852 -- # return 0 00:31:04.459 11:01:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:04.459 11:01:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:04.459 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 11:01:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.459 11:01:20 -- target/dif.sh@139 -- # create_transport 00:31:04.459 11:01:20 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:04.459 11:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.459 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 [2024-07-10 11:01:20.985002] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.459 11:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.459 11:01:20 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:04.459 11:01:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:04.459 11:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:04.459 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 ************************************ 00:31:04.459 START TEST fio_dif_1_default 00:31:04.459 ************************************ 00:31:04.459 11:01:20 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:31:04.459 11:01:20 -- target/dif.sh@86 -- # create_subsystems 0 00:31:04.459 11:01:20 -- target/dif.sh@28 -- # local sub 00:31:04.459 11:01:20 -- target/dif.sh@30 -- # for sub in "$@" 00:31:04.459 11:01:20 -- target/dif.sh@31 -- # create_subsystem 0 00:31:04.459 11:01:20 -- target/dif.sh@18 -- # local sub_id=0 00:31:04.459 11:01:20 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:04.459 11:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.459 11:01:20 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 bdev_null0 00:31:04.459 11:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.459 11:01:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:04.459 11:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.459 11:01:21 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 11:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.459 11:01:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:04.459 11:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.459 11:01:21 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 11:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.459 11:01:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:04.459 11:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:04.459 11:01:21 -- common/autotest_common.sh@10 -- # set +x 00:31:04.459 [2024-07-10 11:01:21.021232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.459 11:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:04.459 11:01:21 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:04.459 11:01:21 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:04.459 11:01:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:04.459 11:01:21 -- nvmf/common.sh@520 -- # config=() 00:31:04.459 11:01:21 -- nvmf/common.sh@520 -- # local subsystem config 00:31:04.459 11:01:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:04.459 11:01:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.459 11:01:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:04.459 { 00:31:04.459 "params": { 00:31:04.459 "name": "Nvme$subsystem", 00:31:04.459 "trtype": "$TEST_TRANSPORT", 00:31:04.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.459 "adrfam": "ipv4", 00:31:04.459 "trsvcid": "$NVMF_PORT", 00:31:04.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.459 "hdgst": ${hdgst:-false}, 00:31:04.459 "ddgst": ${ddgst:-false} 00:31:04.459 }, 00:31:04.459 "method": "bdev_nvme_attach_controller" 00:31:04.459 } 00:31:04.459 EOF 00:31:04.459 )") 00:31:04.459 11:01:21 -- target/dif.sh@82 -- # gen_fio_conf 00:31:04.459 11:01:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.459 11:01:21 -- target/dif.sh@54 -- # local file 00:31:04.459 11:01:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:04.459 11:01:21 -- target/dif.sh@56 -- # cat 00:31:04.459 11:01:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:04.459 11:01:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:04.459 11:01:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:04.459 11:01:21 -- common/autotest_common.sh@1320 -- # shift 00:31:04.459 11:01:21 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:31:04.459 11:01:21 -- nvmf/common.sh@542 -- # cat 00:31:04.459 11:01:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.459 11:01:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:04.459 11:01:21 -- target/dif.sh@72 -- # (( file <= files )) 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:04.459 11:01:21 -- nvmf/common.sh@544 -- # jq . 00:31:04.459 11:01:21 -- nvmf/common.sh@545 -- # IFS=, 00:31:04.459 11:01:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:04.459 "params": { 00:31:04.459 "name": "Nvme0", 00:31:04.459 "trtype": "tcp", 00:31:04.459 "traddr": "10.0.0.2", 00:31:04.459 "adrfam": "ipv4", 00:31:04.459 "trsvcid": "4420", 00:31:04.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:04.459 "hdgst": false, 00:31:04.459 "ddgst": false 00:31:04.459 }, 00:31:04.459 "method": "bdev_nvme_attach_controller" 00:31:04.459 }' 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:04.459 11:01:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:04.459 11:01:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:04.459 11:01:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:04.459 11:01:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:04.459 11:01:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:04.459 11:01:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.459 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:04.459 fio-3.35 00:31:04.459 Starting 1 thread 00:31:04.717 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.974 [2024-07-10 11:01:21.639458] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:04.974 [2024-07-10 11:01:21.639558] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:14.935 00:31:14.935 filename0: (groupid=0, jobs=1): err= 0: pid=3597319: Wed Jul 10 11:01:31 2024 00:31:14.935 read: IOPS=141, BW=566KiB/s (580kB/s)(5664KiB/10003msec) 00:31:14.935 slat (nsec): min=4611, max=56841, avg=9355.48, stdev=2864.83 00:31:14.935 clat (usec): min=830, max=47316, avg=28226.16, stdev=18722.42 00:31:14.935 lat (usec): min=838, max=47333, avg=28235.52, stdev=18722.39 00:31:14.935 clat percentiles (usec): 00:31:14.935 | 1.00th=[ 848], 5.00th=[ 865], 10.00th=[ 889], 20.00th=[ 914], 00:31:14.935 | 30.00th=[ 930], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:14.935 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:14.935 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:31:14.935 | 99.99th=[47449] 00:31:14.935 bw ( KiB/s): min= 384, max= 768, per=98.90%, avg=560.84, stdev=183.01, samples=19 00:31:14.935 iops : min= 96, max= 192, avg=140.21, stdev=45.75, samples=19 00:31:14.935 lat (usec) : 1000=31.92% 00:31:14.935 lat (msec) : 50=68.08% 00:31:14.935 cpu : usr=90.59%, sys=9.11%, ctx=14, majf=0, minf=257 00:31:14.935 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.935 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.935 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:14.935 00:31:14.935 Run status group 0 (all jobs): 00:31:14.935 READ: bw=566KiB/s (580kB/s), 566KiB/s-566KiB/s (580kB/s-580kB/s), io=5664KiB (5800kB), run=10003-10003msec 00:31:15.501 11:01:32 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:15.501 11:01:32 -- target/dif.sh@43 -- # local sub 00:31:15.501 11:01:32 -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.501 11:01:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:15.501 11:01:32 -- target/dif.sh@36 -- # local sub_id=0 00:31:15.501 11:01:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 00:31:15.501 real 0m11.059s 00:31:15.501 user 0m10.191s 00:31:15.501 sys 0m1.186s 00:31:15.501 11:01:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 ************************************ 00:31:15.501 END TEST fio_dif_1_default 00:31:15.501 ************************************ 00:31:15.501 11:01:32 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:15.501 11:01:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:15.501 11:01:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 ************************************ 00:31:15.501 START TEST fio_dif_1_multi_subsystems 
00:31:15.501 ************************************ 00:31:15.501 11:01:32 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:15.501 11:01:32 -- target/dif.sh@92 -- # local files=1 00:31:15.501 11:01:32 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:15.501 11:01:32 -- target/dif.sh@28 -- # local sub 00:31:15.501 11:01:32 -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.501 11:01:32 -- target/dif.sh@31 -- # create_subsystem 0 00:31:15.501 11:01:32 -- target/dif.sh@18 -- # local sub_id=0 00:31:15.501 11:01:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 bdev_null0 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 [2024-07-10 11:01:32.104199] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.501 11:01:32 -- target/dif.sh@31 -- # create_subsystem 1 00:31:15.501 11:01:32 -- target/dif.sh@18 -- # local sub_id=1 00:31:15.501 11:01:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 bdev_null1 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.501 11:01:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.501 11:01:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.501 11:01:32 -- common/autotest_common.sh@10 -- # 
set +x 00:31:15.501 11:01:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.502 11:01:32 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:15.502 11:01:32 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:15.502 11:01:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:15.502 11:01:32 -- nvmf/common.sh@520 -- # config=() 00:31:15.502 11:01:32 -- nvmf/common.sh@520 -- # local subsystem config 00:31:15.502 11:01:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:15.502 11:01:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.502 11:01:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:15.502 { 00:31:15.502 "params": { 00:31:15.502 "name": "Nvme$subsystem", 00:31:15.502 "trtype": "$TEST_TRANSPORT", 00:31:15.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.502 "adrfam": "ipv4", 00:31:15.502 "trsvcid": "$NVMF_PORT", 00:31:15.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.502 "hdgst": ${hdgst:-false}, 00:31:15.502 "ddgst": ${ddgst:-false} 00:31:15.502 }, 00:31:15.502 "method": "bdev_nvme_attach_controller" 00:31:15.502 } 00:31:15.502 EOF 00:31:15.502 )") 00:31:15.502 11:01:32 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.502 11:01:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:15.502 11:01:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:15.502 11:01:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:15.502 11:01:32 -- target/dif.sh@82 -- # gen_fio_conf 00:31:15.502 11:01:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.502 11:01:32 -- target/dif.sh@54 -- # local file 00:31:15.502 11:01:32 -- common/autotest_common.sh@1320 -- # shift 00:31:15.502 11:01:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:15.502 11:01:32 -- target/dif.sh@56 -- # cat 00:31:15.502 11:01:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.502 11:01:32 -- nvmf/common.sh@542 -- # cat 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:15.502 11:01:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:15.502 11:01:32 -- target/dif.sh@72 -- # (( file <= files )) 00:31:15.502 11:01:32 -- target/dif.sh@73 -- # cat 00:31:15.502 11:01:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:15.502 11:01:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:15.502 { 00:31:15.502 "params": { 00:31:15.502 "name": "Nvme$subsystem", 00:31:15.502 "trtype": "$TEST_TRANSPORT", 00:31:15.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.502 "adrfam": "ipv4", 00:31:15.502 "trsvcid": "$NVMF_PORT", 00:31:15.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.502 "hdgst": ${hdgst:-false}, 00:31:15.502 "ddgst": ${ddgst:-false} 00:31:15.502 }, 00:31:15.502 "method": "bdev_nvme_attach_controller" 00:31:15.502 } 00:31:15.502 EOF 00:31:15.502 )") 00:31:15.502 11:01:32 -- nvmf/common.sh@542 -- # cat 00:31:15.502 
11:01:32 -- target/dif.sh@72 -- # (( file++ )) 00:31:15.502 11:01:32 -- target/dif.sh@72 -- # (( file <= files )) 00:31:15.502 11:01:32 -- nvmf/common.sh@544 -- # jq . 00:31:15.502 11:01:32 -- nvmf/common.sh@545 -- # IFS=, 00:31:15.502 11:01:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:15.502 "params": { 00:31:15.502 "name": "Nvme0", 00:31:15.502 "trtype": "tcp", 00:31:15.502 "traddr": "10.0.0.2", 00:31:15.502 "adrfam": "ipv4", 00:31:15.502 "trsvcid": "4420", 00:31:15.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:15.502 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:15.502 "hdgst": false, 00:31:15.502 "ddgst": false 00:31:15.502 }, 00:31:15.502 "method": "bdev_nvme_attach_controller" 00:31:15.502 },{ 00:31:15.502 "params": { 00:31:15.502 "name": "Nvme1", 00:31:15.502 "trtype": "tcp", 00:31:15.502 "traddr": "10.0.0.2", 00:31:15.502 "adrfam": "ipv4", 00:31:15.502 "trsvcid": "4420", 00:31:15.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:15.502 "hdgst": false, 00:31:15.502 "ddgst": false 00:31:15.502 }, 00:31:15.502 "method": "bdev_nvme_attach_controller" 00:31:15.502 }' 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:15.502 11:01:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:15.502 11:01:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:15.502 11:01:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:15.502 11:01:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:15.502 11:01:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:15.502 11:01:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.760 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:15.760 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:15.760 fio-3.35 00:31:15.760 Starting 2 threads 00:31:15.760 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.323 [2024-07-10 11:01:32.913638] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:16.323 [2024-07-10 11:01:32.913700] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:26.284 00:31:26.284 filename0: (groupid=0, jobs=1): err= 0: pid=3598750: Wed Jul 10 11:01:43 2024 00:31:26.284 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10033msec) 00:31:26.284 slat (nsec): min=4296, max=71518, avg=10388.89, stdev=5603.73 00:31:26.284 clat (usec): min=795, max=43727, avg=21268.83, stdev=20272.27 00:31:26.284 lat (usec): min=806, max=43753, avg=21279.22, stdev=20270.60 00:31:26.284 clat percentiles (usec): 00:31:26.284 | 1.00th=[ 816], 5.00th=[ 840], 10.00th=[ 848], 20.00th=[ 865], 00:31:26.284 | 30.00th=[ 881], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:31:26.284 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:31:26.284 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:31:26.284 | 99.99th=[43779] 00:31:26.284 bw ( KiB/s): min= 704, max= 768, per=49.85%, avg=752.00, stdev=28.43, samples=20 00:31:26.284 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:31:26.284 lat (usec) : 1000=49.26% 00:31:26.284 lat (msec) : 2=0.42%, 50=50.32% 00:31:26.284 cpu : usr=97.17%, sys=2.55%, ctx=14, majf=0, minf=270 00:31:26.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.284 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:26.284 filename1: (groupid=0, jobs=1): err= 0: pid=3598751: Wed Jul 10 11:01:43 2024 00:31:26.284 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:31:26.284 slat (nsec): min=4546, max=46364, avg=10985.52, stdev=4873.37 00:31:26.284 clat (usec): min=758, max=42882, avg=21024.08, stdev=20160.25 00:31:26.284 lat (usec): min=767, max=42897, avg=21035.06, stdev=20158.95 00:31:26.284 clat percentiles (usec): 00:31:26.284 | 1.00th=[ 766], 5.00th=[ 783], 10.00th=[ 791], 20.00th=[ 807], 00:31:26.284 | 30.00th=[ 824], 40.00th=[ 848], 50.00th=[41157], 60.00th=[41157], 00:31:26.284 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:26.284 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:26.284 | 99.99th=[42730] 00:31:26.284 bw ( KiB/s): min= 704, max= 768, per=50.44%, avg=761.26, stdev=20.18, samples=19 00:31:26.284 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:31:26.284 lat (usec) : 1000=49.11% 00:31:26.284 lat (msec) : 2=0.79%, 50=50.11% 00:31:26.284 cpu : usr=97.62%, sys=2.06%, ctx=14, majf=0, minf=211 00:31:26.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.284 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:26.284 00:31:26.284 Run status group 0 (all jobs): 00:31:26.284 READ: bw=1509KiB/s (1545kB/s), 751KiB/s-760KiB/s (769kB/s-778kB/s), io=14.8MiB (15.5MB), run=10003-10033msec 00:31:26.540 11:01:43 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:26.540 11:01:43 -- target/dif.sh@43 -- # local sub 00:31:26.540 11:01:43 -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.540 11:01:43 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.540 11:01:43 -- target/dif.sh@36 -- # local sub_id=0 00:31:26.540 11:01:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.540 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.540 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.540 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.540 11:01:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.540 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.540 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.540 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.540 11:01:43 -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.540 11:01:43 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:26.540 11:01:43 -- target/dif.sh@36 -- # local sub_id=1 00:31:26.540 11:01:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.541 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.541 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.798 11:01:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:26.798 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.798 00:31:26.798 real 0m11.301s 00:31:26.798 user 0m20.782s 00:31:26.798 sys 0m0.751s 00:31:26.798 11:01:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 ************************************ 00:31:26.798 END TEST fio_dif_1_multi_subsystems 00:31:26.798 ************************************ 00:31:26.798 11:01:43 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:26.798 11:01:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:26.798 11:01:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 ************************************ 00:31:26.798 START TEST fio_dif_rand_params 00:31:26.798 ************************************ 00:31:26.798 11:01:43 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:26.798 11:01:43 -- target/dif.sh@100 -- # local NULL_DIF 00:31:26.798 11:01:43 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:26.798 11:01:43 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:26.798 11:01:43 -- target/dif.sh@103 -- # bs=128k 00:31:26.798 11:01:43 -- target/dif.sh@103 -- # numjobs=3 00:31:26.798 11:01:43 -- target/dif.sh@103 -- # iodepth=3 00:31:26.798 11:01:43 -- target/dif.sh@103 -- # runtime=5 00:31:26.798 11:01:43 -- target/dif.sh@105 -- # create_subsystems 0 00:31:26.798 11:01:43 -- target/dif.sh@28 -- # local sub 00:31:26.798 11:01:43 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.798 11:01:43 -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.798 11:01:43 -- target/dif.sh@18 -- # local sub_id=0 00:31:26.798 11:01:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:26.798 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 bdev_null0 00:31:26.798 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:31:26.798 11:01:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.798 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.798 11:01:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.798 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.798 11:01:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.798 11:01:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.798 11:01:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.798 [2024-07-10 11:01:43.429153] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.798 11:01:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.798 11:01:43 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:26.798 11:01:43 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:26.798 11:01:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:26.798 11:01:43 -- nvmf/common.sh@520 -- # config=() 00:31:26.798 11:01:43 -- nvmf/common.sh@520 -- # local subsystem config 00:31:26.798 11:01:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:26.798 11:01:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:26.798 { 00:31:26.798 "params": { 00:31:26.798 "name": "Nvme$subsystem", 00:31:26.798 "trtype": "$TEST_TRANSPORT", 00:31:26.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.798 "adrfam": "ipv4", 00:31:26.798 "trsvcid": "$NVMF_PORT", 00:31:26.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.798 "hdgst": ${hdgst:-false}, 00:31:26.798 "ddgst": ${ddgst:-false} 00:31:26.798 }, 00:31:26.798 "method": "bdev_nvme_attach_controller" 00:31:26.798 } 00:31:26.798 EOF 00:31:26.798 )") 00:31:26.798 11:01:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.798 11:01:43 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.798 11:01:43 -- target/dif.sh@82 -- # gen_fio_conf 00:31:26.798 11:01:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:26.798 11:01:43 -- target/dif.sh@54 -- # local file 00:31:26.798 11:01:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.798 11:01:43 -- target/dif.sh@56 -- # cat 00:31:26.798 11:01:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:26.798 11:01:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.798 11:01:43 -- common/autotest_common.sh@1320 -- # shift 00:31:26.798 11:01:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:26.798 11:01:43 -- nvmf/common.sh@542 -- # cat 00:31:26.798 11:01:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
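For readability, the fio launch being assembled in the trace above reduces to the following shape: the SPDK bdev fio plugin is preloaded and the generated JSON target config is passed on fd 62 (fd 61 is taken here to be the generated fio job file; that pairing is inferred from the gen_fio_conf/create_json_sub_conf helpers rather than stated explicitly in the log):

  # sketch of the traced invocation
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61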
00:31:26.798 11:01:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:26.798 11:01:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:26.798 11:01:43 -- nvmf/common.sh@544 -- # jq . 00:31:26.798 11:01:43 -- nvmf/common.sh@545 -- # IFS=, 00:31:26.798 11:01:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:26.798 "params": { 00:31:26.798 "name": "Nvme0", 00:31:26.798 "trtype": "tcp", 00:31:26.798 "traddr": "10.0.0.2", 00:31:26.798 "adrfam": "ipv4", 00:31:26.798 "trsvcid": "4420", 00:31:26.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.798 "hdgst": false, 00:31:26.798 "ddgst": false 00:31:26.798 }, 00:31:26.798 "method": "bdev_nvme_attach_controller" 00:31:26.798 }' 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:26.798 11:01:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:26.798 11:01:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:26.798 11:01:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:26.798 11:01:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:26.798 11:01:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:26.798 11:01:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.055 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:27.055 ... 00:31:27.055 fio-3.35 00:31:27.055 Starting 3 threads 00:31:27.055 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.313 [2024-07-10 11:01:44.058187] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:27.313 [2024-07-10 11:01:44.058261] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:32.576 00:31:32.576 filename0: (groupid=0, jobs=1): err= 0: pid=3600160: Wed Jul 10 11:01:49 2024 00:31:32.576 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(148MiB/5046msec) 00:31:32.576 slat (nsec): min=4341, max=29019, avg=12896.65, stdev=1781.11 00:31:32.576 clat (usec): min=4938, max=90513, avg=12774.34, stdev=12104.98 00:31:32.576 lat (usec): min=4951, max=90527, avg=12787.24, stdev=12104.91 00:31:32.576 clat percentiles (usec): 00:31:32.576 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6718], 00:31:32.576 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10159], 00:31:32.576 | 70.00th=[11076], 80.00th=[12387], 90.00th=[14615], 95.00th=[49021], 00:31:32.576 | 99.00th=[52691], 99.50th=[53216], 99.90th=[89654], 99.95th=[90702], 00:31:32.576 | 99.99th=[90702] 00:31:32.576 bw ( KiB/s): min=19200, max=45312, per=39.25%, avg=30156.80, stdev=7648.46, samples=10 00:31:32.576 iops : min= 150, max= 354, avg=235.60, stdev=59.75, samples=10 00:31:32.576 lat (msec) : 10=58.05%, 20=33.05%, 50=5.34%, 100=3.56% 00:31:32.576 cpu : usr=93.20%, sys=6.32%, ctx=9, majf=0, minf=96 00:31:32.576 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.576 issued rwts: total=1180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:32.576 filename0: (groupid=0, jobs=1): err= 0: pid=3600162: Wed Jul 10 11:01:49 2024 00:31:32.576 read: IOPS=180, BW=22.6MiB/s (23.7MB/s)(114MiB/5046msec) 00:31:32.576 slat (nsec): min=4468, max=46192, avg=13226.66, stdev=2193.14 00:31:32.576 clat (usec): min=5891, max=92104, avg=16529.90, stdev=14671.04 00:31:32.576 lat (usec): min=5903, max=92118, avg=16543.13, stdev=14670.94 00:31:32.576 clat percentiles (usec): 00:31:32.576 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 8717], 00:31:32.576 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11207], 60.00th=[11994], 00:31:32.576 | 70.00th=[12780], 80.00th=[13829], 90.00th=[49021], 95.00th=[51119], 00:31:32.576 | 99.00th=[53216], 99.50th=[54264], 99.90th=[91751], 99.95th=[91751], 00:31:32.576 | 99.99th=[91751] 00:31:32.576 bw ( KiB/s): min=16128, max=29184, per=30.32%, avg=23296.00, stdev=4364.53, samples=10 00:31:32.576 iops : min= 126, max= 228, avg=182.00, stdev=34.10, samples=10 00:31:32.576 lat (msec) : 10=36.40%, 20=48.46%, 50=6.80%, 100=8.33% 00:31:32.576 cpu : usr=93.80%, sys=5.77%, ctx=11, majf=0, minf=135 00:31:32.576 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.576 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:32.576 filename0: (groupid=0, jobs=1): err= 0: pid=3600163: Wed Jul 10 11:01:49 2024 00:31:32.576 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(117MiB/5043msec) 00:31:32.576 slat (nsec): min=4375, max=27181, avg=13499.64, stdev=2248.70 00:31:32.576 clat (usec): min=4842, max=88665, avg=16127.32, stdev=14756.95 00:31:32.576 lat (usec): min=4855, max=88678, avg=16140.82, stdev=14757.00 00:31:32.576 clat 
percentiles (usec): 00:31:32.576 | 1.00th=[ 5407], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 7898], 00:31:32.576 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[11863], 00:31:32.576 | 70.00th=[12780], 80.00th=[14484], 90.00th=[49546], 95.00th=[51119], 00:31:32.576 | 99.00th=[53740], 99.50th=[54789], 99.90th=[88605], 99.95th=[88605], 00:31:32.576 | 99.99th=[88605] 00:31:32.576 bw ( KiB/s): min=17920, max=30208, per=31.12%, avg=23915.00, stdev=3983.53, samples=10 00:31:32.576 iops : min= 140, max= 236, avg=186.80, stdev=31.13, samples=10 00:31:32.576 lat (msec) : 10=43.01%, 20=42.05%, 50=7.15%, 100=7.79% 00:31:32.576 cpu : usr=94.21%, sys=5.34%, ctx=7, majf=0, minf=75 00:31:32.576 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.576 issued rwts: total=937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.576 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:32.576 00:31:32.576 Run status group 0 (all jobs): 00:31:32.576 READ: bw=75.0MiB/s (78.7MB/s), 22.6MiB/s-29.2MiB/s (23.7MB/s-30.7MB/s), io=379MiB (397MB), run=5043-5046msec 00:31:32.834 11:01:49 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:32.834 11:01:49 -- target/dif.sh@43 -- # local sub 00:31:32.834 11:01:49 -- target/dif.sh@45 -- # for sub in "$@" 00:31:32.834 11:01:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:32.834 11:01:49 -- target/dif.sh@36 -- # local sub_id=0 00:31:32.834 11:01:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:32.834 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.834 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.834 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.834 11:01:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:32.834 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.834 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.834 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.834 11:01:49 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:32.834 11:01:49 -- target/dif.sh@109 -- # bs=4k 00:31:32.834 11:01:49 -- target/dif.sh@109 -- # numjobs=8 00:31:32.834 11:01:49 -- target/dif.sh@109 -- # iodepth=16 00:31:32.835 11:01:49 -- target/dif.sh@109 -- # runtime= 00:31:32.835 11:01:49 -- target/dif.sh@109 -- # files=2 00:31:32.835 11:01:49 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:32.835 11:01:49 -- target/dif.sh@28 -- # local sub 00:31:32.835 11:01:49 -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.835 11:01:49 -- target/dif.sh@31 -- # create_subsystem 0 00:31:32.835 11:01:49 -- target/dif.sh@18 -- # local sub_id=0 00:31:32.835 11:01:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 bdev_null0 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 [2024-07-10 11:01:49.554438] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.835 11:01:49 -- target/dif.sh@31 -- # create_subsystem 1 00:31:32.835 11:01:49 -- target/dif.sh@18 -- # local sub_id=1 00:31:32.835 11:01:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 bdev_null1 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.835 11:01:49 -- target/dif.sh@31 -- # create_subsystem 2 00:31:32.835 11:01:49 -- target/dif.sh@18 -- # local sub_id=2 00:31:32.835 11:01:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 bdev_null2 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:32.835 11:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.835 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:31:32.835 11:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.835 11:01:49 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:32.835 11:01:49 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:32.835 11:01:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:32.835 11:01:49 -- nvmf/common.sh@520 -- # config=() 00:31:32.835 11:01:49 -- nvmf/common.sh@520 -- # local subsystem config 00:31:32.835 11:01:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:32.835 11:01:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.835 11:01:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:32.835 { 00:31:32.835 "params": { 00:31:32.835 "name": "Nvme$subsystem", 00:31:32.835 "trtype": "$TEST_TRANSPORT", 00:31:32.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.835 "adrfam": "ipv4", 00:31:32.835 "trsvcid": "$NVMF_PORT", 00:31:32.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.835 "hdgst": ${hdgst:-false}, 00:31:32.835 "ddgst": ${ddgst:-false} 00:31:32.835 }, 00:31:32.835 "method": "bdev_nvme_attach_controller" 00:31:32.835 } 00:31:32.835 EOF 00:31:32.835 )") 00:31:32.835 11:01:49 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.835 11:01:49 -- target/dif.sh@82 -- # gen_fio_conf 00:31:32.835 11:01:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:32.835 11:01:49 -- target/dif.sh@54 -- # local file 00:31:32.835 11:01:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:32.835 11:01:49 -- target/dif.sh@56 -- # cat 00:31:32.835 11:01:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:32.835 11:01:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.835 11:01:49 -- common/autotest_common.sh@1320 -- # shift 00:31:32.835 11:01:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:32.835 11:01:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.835 11:01:49 -- nvmf/common.sh@542 -- # cat 00:31:32.835 11:01:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.835 11:01:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:32.835 11:01:49 -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.835 11:01:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:32.835 11:01:49 -- target/dif.sh@73 -- # cat 00:31:32.835 11:01:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:32.835 11:01:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:32.835 11:01:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:32.835 { 00:31:32.835 "params": { 00:31:32.835 "name": "Nvme$subsystem", 00:31:32.835 "trtype": "$TEST_TRANSPORT", 00:31:32.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.835 "adrfam": "ipv4", 
00:31:32.835 "trsvcid": "$NVMF_PORT", 00:31:32.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.835 "hdgst": ${hdgst:-false}, 00:31:32.835 "ddgst": ${ddgst:-false} 00:31:32.835 }, 00:31:32.835 "method": "bdev_nvme_attach_controller" 00:31:32.835 } 00:31:32.835 EOF 00:31:32.835 )") 00:31:32.835 11:01:49 -- nvmf/common.sh@542 -- # cat 00:31:32.835 11:01:49 -- target/dif.sh@72 -- # (( file++ )) 00:31:32.835 11:01:49 -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.835 11:01:49 -- target/dif.sh@73 -- # cat 00:31:32.835 11:01:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:32.835 11:01:49 -- target/dif.sh@72 -- # (( file++ )) 00:31:32.835 11:01:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:32.835 { 00:31:32.835 "params": { 00:31:32.835 "name": "Nvme$subsystem", 00:31:32.835 "trtype": "$TEST_TRANSPORT", 00:31:32.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.835 "adrfam": "ipv4", 00:31:32.835 "trsvcid": "$NVMF_PORT", 00:31:32.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.835 "hdgst": ${hdgst:-false}, 00:31:32.835 "ddgst": ${ddgst:-false} 00:31:32.835 }, 00:31:32.835 "method": "bdev_nvme_attach_controller" 00:31:32.835 } 00:31:32.835 EOF 00:31:32.835 )") 00:31:32.835 11:01:49 -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.835 11:01:49 -- nvmf/common.sh@542 -- # cat 00:31:32.835 11:01:49 -- nvmf/common.sh@544 -- # jq . 00:31:32.835 11:01:49 -- nvmf/common.sh@545 -- # IFS=, 00:31:32.835 11:01:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:32.835 "params": { 00:31:32.835 "name": "Nvme0", 00:31:32.835 "trtype": "tcp", 00:31:32.835 "traddr": "10.0.0.2", 00:31:32.835 "adrfam": "ipv4", 00:31:32.835 "trsvcid": "4420", 00:31:32.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:32.835 "hdgst": false, 00:31:32.835 "ddgst": false 00:31:32.835 }, 00:31:32.835 "method": "bdev_nvme_attach_controller" 00:31:32.835 },{ 00:31:32.835 "params": { 00:31:32.835 "name": "Nvme1", 00:31:32.835 "trtype": "tcp", 00:31:32.836 "traddr": "10.0.0.2", 00:31:32.836 "adrfam": "ipv4", 00:31:32.836 "trsvcid": "4420", 00:31:32.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:32.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:32.836 "hdgst": false, 00:31:32.836 "ddgst": false 00:31:32.836 }, 00:31:32.836 "method": "bdev_nvme_attach_controller" 00:31:32.836 },{ 00:31:32.836 "params": { 00:31:32.836 "name": "Nvme2", 00:31:32.836 "trtype": "tcp", 00:31:32.836 "traddr": "10.0.0.2", 00:31:32.836 "adrfam": "ipv4", 00:31:32.836 "trsvcid": "4420", 00:31:32.836 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:32.836 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:32.836 "hdgst": false, 00:31:32.836 "ddgst": false 00:31:32.836 }, 00:31:32.836 "method": "bdev_nvme_attach_controller" 00:31:32.836 }' 00:31:32.836 11:01:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:32.836 11:01:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:32.836 11:01:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.836 11:01:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.836 11:01:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:32.836 11:01:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:33.094 11:01:49 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:31:33.094 11:01:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:33.094 11:01:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:33.094 11:01:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.094 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:33.094 ... 00:31:33.094 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:33.094 ... 00:31:33.094 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:33.094 ... 00:31:33.094 fio-3.35 00:31:33.094 Starting 24 threads 00:31:33.094 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.031 [2024-07-10 11:01:50.705236] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:34.031 [2024-07-10 11:01:50.705320] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:46.287 00:31:46.287 filename0: (groupid=0, jobs=1): err= 0: pid=3600956: Wed Jul 10 11:02:00 2024 00:31:46.287 read: IOPS=361, BW=1445KiB/s (1480kB/s)(14.1MiB/10025msec) 00:31:46.287 slat (usec): min=3, max=153, avg=34.82, stdev=23.78 00:31:46.287 clat (msec): min=6, max=305, avg=44.01, stdev=50.24 00:31:46.287 lat (msec): min=6, max=305, avg=44.04, stdev=50.25 00:31:46.287 clat percentiles (msec): 00:31:46.287 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.287 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.287 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 211], 00:31:46.287 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.287 | 99.99th=[ 305] 00:31:46.287 bw ( KiB/s): min= 240, max= 2096, per=4.17%, avg=1442.20, stdev=832.51, samples=20 00:31:46.287 iops : min= 60, max= 524, avg=360.55, stdev=208.13, samples=20 00:31:46.287 lat (msec) : 10=0.08%, 20=0.39%, 50=92.93%, 100=0.41%, 250=3.98% 00:31:46.287 lat (msec) : 500=2.21% 00:31:46.287 cpu : usr=96.33%, sys=2.08%, ctx=87, majf=0, minf=36 00:31:46.287 IO depths : 1=2.7%, 2=7.2%, 4=18.2%, 8=62.0%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:46.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.287 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.287 issued rwts: total=3622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.287 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.287 filename0: (groupid=0, jobs=1): err= 0: pid=3600957: Wed Jul 10 11:02:00 2024 00:31:46.287 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10009msec) 00:31:46.287 slat (usec): min=8, max=114, avg=40.78, stdev=12.62 00:31:46.287 clat (msec): min=26, max=305, avg=44.13, stdev=51.57 00:31:46.287 lat (msec): min=26, max=305, avg=44.17, stdev=51.57 00:31:46.287 clat percentiles (msec): 00:31:46.287 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.287 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.287 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.287 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.287 | 99.99th=[ 305] 00:31:46.287 bw ( KiB/s): min= 256, max= 2048, per=4.07%, avg=1407.74, stdev=844.28, samples=19 00:31:46.287 iops : min= 64, max= 
512, avg=351.89, stdev=211.04, samples=19 00:31:46.287 lat (msec) : 50=93.78%, 100=0.44%, 250=3.11%, 500=2.67% 00:31:46.287 cpu : usr=96.96%, sys=1.85%, ctx=39, majf=0, minf=28 00:31:46.287 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:46.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=3600958: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=360, BW=1443KiB/s (1478kB/s)(14.1MiB/10022msec) 00:31:46.288 slat (usec): min=4, max=478, avg=46.17, stdev=22.74 00:31:46.288 clat (msec): min=26, max=305, avg=43.95, stdev=50.13 00:31:46.288 lat (msec): min=26, max=305, avg=44.00, stdev=50.12 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.288 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.288 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.288 | 99.99th=[ 305] 00:31:46.288 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1439.75, stdev=830.25, samples=20 00:31:46.288 iops : min= 64, max= 512, avg=359.90, stdev=207.53, samples=20 00:31:46.288 lat (msec) : 50=93.36%, 100=0.44%, 250=3.98%, 500=2.21% 00:31:46.288 cpu : usr=92.36%, sys=3.84%, ctx=118, majf=0, minf=31 00:31:46.288 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=3600959: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=364, BW=1458KiB/s (1492kB/s)(14.3MiB/10028msec) 00:31:46.288 slat (usec): min=3, max=180, avg=18.47, stdev=13.92 00:31:46.288 clat (msec): min=6, max=322, avg=43.76, stdev=50.05 00:31:46.288 lat (msec): min=6, max=322, avg=43.78, stdev=50.06 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 22], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.288 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 211], 00:31:46.288 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 321], 00:31:46.288 | 99.99th=[ 321] 00:31:46.288 bw ( KiB/s): min= 240, max= 2224, per=4.21%, avg=1455.00, stdev=843.82, samples=20 00:31:46.288 iops : min= 60, max= 556, avg=363.75, stdev=210.96, samples=20 00:31:46.288 lat (msec) : 10=0.33%, 20=0.52%, 50=92.58%, 100=0.44%, 250=3.94% 00:31:46.288 lat (msec) : 500=2.19% 00:31:46.288 cpu : usr=98.07%, sys=1.27%, ctx=21, majf=0, minf=31 00:31:46.288 IO depths : 1=4.2%, 2=9.6%, 4=21.9%, 8=56.0%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename0: (groupid=0, jobs=1): 
err= 0: pid=3600960: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=360, BW=1443KiB/s (1478kB/s)(14.1MiB/10022msec) 00:31:46.288 slat (usec): min=7, max=133, avg=37.95, stdev=19.20 00:31:46.288 clat (msec): min=13, max=321, avg=43.99, stdev=50.18 00:31:46.288 lat (msec): min=13, max=321, avg=44.03, stdev=50.18 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.288 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 205], 00:31:46.288 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 321], 00:31:46.288 | 99.99th=[ 321] 00:31:46.288 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1439.75, stdev=830.25, samples=20 00:31:46.288 iops : min= 64, max= 512, avg=359.90, stdev=207.53, samples=20 00:31:46.288 lat (msec) : 20=0.44%, 50=92.92%, 100=0.44%, 250=3.93%, 500=2.27% 00:31:46.288 cpu : usr=95.27%, sys=2.49%, ctx=119, majf=0, minf=29 00:31:46.288 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=3600961: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=361, BW=1446KiB/s (1480kB/s)(14.1MiB/10005msec) 00:31:46.288 slat (usec): min=8, max=748, avg=46.24, stdev=23.62 00:31:46.288 clat (msec): min=14, max=304, avg=43.86, stdev=51.76 00:31:46.288 lat (msec): min=14, max=304, avg=43.90, stdev=51.77 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.288 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.288 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.288 | 99.99th=[ 305] 00:31:46.288 bw ( KiB/s): min= 256, max= 2048, per=4.07%, avg=1408.00, stdev=844.64, samples=19 00:31:46.288 iops : min= 64, max= 512, avg=352.00, stdev=211.16, samples=19 00:31:46.288 lat (msec) : 20=0.50%, 50=93.42%, 100=0.33%, 250=3.10%, 500=2.65% 00:31:46.288 cpu : usr=93.36%, sys=3.31%, ctx=154, majf=0, minf=25 00:31:46.288 IO depths : 1=4.4%, 2=10.5%, 4=24.7%, 8=52.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:31:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=3600962: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10002msec) 00:31:46.288 slat (usec): min=10, max=207, avg=49.87, stdev=30.70 00:31:46.288 clat (msec): min=11, max=424, avg=43.99, stdev=53.12 00:31:46.288 lat (msec): min=11, max=424, avg=44.04, stdev=53.11 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.288 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 215], 00:31:46.288 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 414], 99.95th=[ 
426], 00:31:46.288 | 99.99th=[ 426] 00:31:46.288 bw ( KiB/s): min= 128, max= 2064, per=4.07%, avg=1407.79, stdev=846.71, samples=19 00:31:46.288 iops : min= 32, max= 516, avg=351.95, stdev=211.68, samples=19 00:31:46.288 lat (msec) : 20=0.58%, 50=92.97%, 100=0.72%, 250=3.00%, 500=2.72% 00:31:46.288 cpu : usr=91.49%, sys=4.04%, ctx=110, majf=0, minf=41 00:31:46.288 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename0: (groupid=0, jobs=1): err= 0: pid=3600963: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=364, BW=1458KiB/s (1493kB/s)(14.2MiB/10005msec) 00:31:46.288 slat (usec): min=8, max=113, avg=36.02, stdev=21.76 00:31:46.288 clat (msec): min=11, max=456, avg=43.61, stdev=52.54 00:31:46.288 lat (msec): min=11, max=456, avg=43.64, stdev=52.54 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.288 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 215], 00:31:46.288 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 393], 99.95th=[ 456], 00:31:46.288 | 99.99th=[ 456] 00:31:46.288 bw ( KiB/s): min= 240, max= 2208, per=4.11%, avg=1420.63, stdev=856.24, samples=19 00:31:46.288 iops : min= 60, max= 552, avg=355.16, stdev=214.06, samples=19 00:31:46.288 lat (msec) : 20=2.72%, 50=90.65%, 100=0.99%, 250=2.96%, 500=2.69% 00:31:46.288 cpu : usr=96.79%, sys=2.02%, ctx=361, majf=0, minf=44 00:31:46.288 IO depths : 1=2.0%, 2=7.7%, 4=22.9%, 8=56.7%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:46.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.288 issued rwts: total=3646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.288 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.288 filename1: (groupid=0, jobs=1): err= 0: pid=3600964: Wed Jul 10 11:02:00 2024 00:31:46.288 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10009msec) 00:31:46.288 slat (nsec): min=5238, max=93654, avg=42323.19, stdev=15025.50 00:31:46.288 clat (msec): min=24, max=305, avg=44.13, stdev=51.53 00:31:46.288 lat (msec): min=24, max=305, avg=44.17, stdev=51.53 00:31:46.288 clat percentiles (msec): 00:31:46.288 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.288 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.288 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.288 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.288 | 99.99th=[ 305] 00:31:46.288 bw ( KiB/s): min= 256, max= 2064, per=4.07%, avg=1407.74, stdev=844.56, samples=19 00:31:46.288 iops : min= 64, max= 516, avg=351.89, stdev=211.11, samples=19 00:31:46.288 lat (msec) : 50=93.78%, 100=0.44%, 250=3.11%, 500=2.67% 00:31:46.288 cpu : usr=96.54%, sys=2.02%, ctx=498, majf=0, minf=24 00:31:46.288 IO depths : 1=5.1%, 2=11.1%, 4=24.1%, 8=52.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: 
total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600965: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10008msec) 00:31:46.289 slat (usec): min=4, max=129, avg=39.37, stdev=17.63 00:31:46.289 clat (msec): min=13, max=304, avg=44.13, stdev=51.94 00:31:46.289 lat (msec): min=13, max=304, avg=44.17, stdev=51.95 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.289 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.289 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.289 | 99.99th=[ 305] 00:31:46.289 bw ( KiB/s): min= 128, max= 2048, per=4.05%, avg=1402.11, stdev=841.29, samples=19 00:31:46.289 iops : min= 32, max= 512, avg=350.53, stdev=210.32, samples=19 00:31:46.289 lat (msec) : 20=1.22%, 50=92.09%, 100=0.92%, 250=3.11%, 500=2.67% 00:31:46.289 cpu : usr=98.40%, sys=1.15%, ctx=17, majf=0, minf=31 00:31:46.289 IO depths : 1=2.7%, 2=8.6%, 4=23.6%, 8=55.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: total=3602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600966: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10009msec) 00:31:46.289 slat (usec): min=9, max=453, avg=39.37, stdev=16.73 00:31:46.289 clat (msec): min=19, max=344, avg=44.15, stdev=51.73 00:31:46.289 lat (msec): min=20, max=344, avg=44.19, stdev=51.73 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.289 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.289 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 347], 00:31:46.289 | 99.99th=[ 347] 00:31:46.289 bw ( KiB/s): min= 240, max= 2048, per=4.07%, avg=1407.74, stdev=844.56, samples=19 00:31:46.289 iops : min= 60, max= 512, avg=351.89, stdev=211.11, samples=19 00:31:46.289 lat (msec) : 20=0.03%, 50=93.75%, 100=0.44%, 250=3.17%, 500=2.61% 00:31:46.289 cpu : usr=97.34%, sys=1.55%, ctx=34, majf=0, minf=28 00:31:46.289 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600967: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=360, BW=1442KiB/s (1476kB/s)(14.1MiB/10005msec) 00:31:46.289 slat (usec): min=12, max=159, avg=41.84, stdev=21.36 00:31:46.289 clat (msec): min=13, max=468, avg=43.99, stdev=54.12 00:31:46.289 lat (msec): min=13, max=468, avg=44.03, stdev=54.12 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.289 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 
60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.289 | 99.00th=[ 292], 99.50th=[ 363], 99.90th=[ 401], 99.95th=[ 468], 00:31:46.289 | 99.99th=[ 468] 00:31:46.289 bw ( KiB/s): min= 128, max= 2048, per=4.06%, avg=1403.79, stdev=853.40, samples=19 00:31:46.289 iops : min= 32, max= 512, avg=350.95, stdev=213.35, samples=19 00:31:46.289 lat (msec) : 20=0.61%, 50=93.62%, 250=3.11%, 500=2.66% 00:31:46.289 cpu : usr=95.76%, sys=2.01%, ctx=248, majf=0, minf=25 00:31:46.289 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: total=3606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600968: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=360, BW=1443KiB/s (1477kB/s)(14.1MiB/10027msec) 00:31:46.289 slat (usec): min=8, max=257, avg=36.51, stdev=24.53 00:31:46.289 clat (msec): min=28, max=305, avg=44.03, stdev=50.17 00:31:46.289 lat (msec): min=28, max=305, avg=44.07, stdev=50.16 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.289 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.289 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.289 | 99.99th=[ 305] 00:31:46.289 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1439.80, stdev=830.32, samples=20 00:31:46.289 iops : min= 64, max= 512, avg=359.95, stdev=207.58, samples=20 00:31:46.289 lat (msec) : 50=93.36%, 100=0.44%, 250=3.98%, 500=2.21% 00:31:46.289 cpu : usr=95.17%, sys=2.50%, ctx=223, majf=0, minf=26 00:31:46.289 IO depths : 1=5.6%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600969: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=361, BW=1445KiB/s (1480kB/s)(14.1MiB/10009msec) 00:31:46.289 slat (usec): min=8, max=160, avg=44.78, stdev=17.59 00:31:46.289 clat (msec): min=13, max=352, avg=43.87, stdev=51.54 00:31:46.289 lat (msec): min=14, max=352, avg=43.92, stdev=51.54 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.289 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 215], 00:31:46.289 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 351], 00:31:46.289 | 99.99th=[ 351] 00:31:46.289 bw ( KiB/s): min= 240, max= 2048, per=4.07%, avg=1408.00, stdev=844.77, samples=19 00:31:46.289 iops : min= 60, max= 512, avg=352.00, stdev=211.19, samples=19 00:31:46.289 lat (msec) : 20=0.44%, 50=93.36%, 100=0.44%, 250=3.15%, 500=2.60% 00:31:46.289 cpu : usr=98.00%, sys=1.41%, ctx=35, majf=0, minf=26 00:31:46.289 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600970: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=360, BW=1443KiB/s (1478kB/s)(14.1MiB/10023msec) 00:31:46.289 slat (usec): min=7, max=162, avg=32.74, stdev=22.65 00:31:46.289 clat (msec): min=23, max=305, avg=44.06, stdev=50.05 00:31:46.289 lat (msec): min=23, max=305, avg=44.09, stdev=50.06 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.289 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 211], 00:31:46.289 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.289 | 99.99th=[ 305] 00:31:46.289 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1439.75, stdev=830.25, samples=20 00:31:46.289 iops : min= 64, max= 512, avg=359.90, stdev=207.53, samples=20 00:31:46.289 lat (msec) : 50=93.36%, 100=0.44%, 250=3.98%, 500=2.21% 00:31:46.289 cpu : usr=97.00%, sys=1.60%, ctx=58, majf=0, minf=30 00:31:46.289 IO depths : 1=4.0%, 2=10.2%, 4=24.8%, 8=52.5%, 16=8.5%, 32=0.0%, >=64=0.0% 00:31:46.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.289 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.289 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.289 filename1: (groupid=0, jobs=1): err= 0: pid=3600971: Wed Jul 10 11:02:00 2024 00:31:46.289 read: IOPS=361, BW=1445KiB/s (1479kB/s)(14.1MiB/10012msec) 00:31:46.289 slat (usec): min=11, max=190, avg=50.46, stdev=32.92 00:31:46.289 clat (msec): min=9, max=424, avg=43.78, stdev=53.10 00:31:46.289 lat (msec): min=9, max=424, avg=43.83, stdev=53.09 00:31:46.289 clat percentiles (msec): 00:31:46.289 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.289 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.289 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 32], 95.00th=[ 215], 00:31:46.289 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 422], 99.95th=[ 426], 00:31:46.289 | 99.99th=[ 426] 00:31:46.289 bw ( KiB/s): min= 128, max= 2048, per=4.07%, avg=1408.00, stdev=845.83, samples=19 00:31:46.289 iops : min= 32, max= 512, avg=352.00, stdev=211.46, samples=19 00:31:46.289 lat (msec) : 10=0.06%, 20=0.44%, 50=93.36%, 100=0.44%, 250=2.99% 00:31:46.289 lat (msec) : 500=2.71% 00:31:46.290 cpu : usr=96.43%, sys=2.06%, ctx=213, majf=0, minf=23 00:31:46.290 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600972: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=358, BW=1434KiB/s (1468kB/s)(14.0MiB/10006msec) 00:31:46.290 slat (nsec): min=3890, max=88803, avg=32415.93, stdev=18147.25 00:31:46.290 clat (msec): min=3, max=403, avg=44.41, stdev=54.04 00:31:46.290 lat (msec): min=3, max=403, avg=44.44, stdev=54.03 
00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 35], 95.00th=[ 213], 00:31:46.290 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 405], 99.95th=[ 405], 00:31:46.290 | 99.99th=[ 405] 00:31:46.290 bw ( KiB/s): min= 128, max= 2104, per=4.03%, avg=1395.37, stdev=847.26, samples=19 00:31:46.290 iops : min= 32, max= 526, avg=348.84, stdev=211.81, samples=19 00:31:46.290 lat (msec) : 4=0.03%, 10=0.08%, 20=0.89%, 50=92.69%, 100=0.56% 00:31:46.290 lat (msec) : 250=3.01%, 500=2.73% 00:31:46.290 cpu : usr=97.88%, sys=1.47%, ctx=106, majf=0, minf=26 00:31:46.290 IO depths : 1=0.1%, 2=4.7%, 4=18.9%, 8=62.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=93.1%, 8=2.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600973: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=361, BW=1448KiB/s (1482kB/s)(14.1MiB/10008msec) 00:31:46.290 slat (usec): min=8, max=110, avg=40.77, stdev=14.20 00:31:46.290 clat (msec): min=14, max=303, avg=43.86, stdev=51.84 00:31:46.290 lat (msec): min=14, max=303, avg=43.90, stdev=51.84 00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.290 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.290 | 99.99th=[ 305] 00:31:46.290 bw ( KiB/s): min= 128, max= 2096, per=4.08%, avg=1410.53, stdev=847.92, samples=19 00:31:46.290 iops : min= 32, max= 524, avg=352.63, stdev=211.98, samples=19 00:31:46.290 lat (msec) : 20=0.61%, 50=93.59%, 100=0.06%, 250=3.09%, 500=2.65% 00:31:46.290 cpu : usr=98.18%, sys=1.32%, ctx=57, majf=0, minf=27 00:31:46.290 IO depths : 1=4.7%, 2=10.7%, 4=23.9%, 8=52.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600974: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=359, BW=1440KiB/s (1474kB/s)(14.1MiB/10002msec) 00:31:46.290 slat (nsec): min=4071, max=89092, avg=39965.37, stdev=11895.10 00:31:46.290 clat (msec): min=21, max=424, avg=44.09, stdev=53.10 00:31:46.290 lat (msec): min=21, max=425, avg=44.13, stdev=53.10 00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 215], 00:31:46.290 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 414], 99.95th=[ 426], 00:31:46.290 | 99.99th=[ 426] 00:31:46.290 bw ( KiB/s): min= 128, max= 2048, per=4.07%, avg=1407.79, stdev=845.60, samples=19 00:31:46.290 iops : min= 32, max= 512, avg=351.95, stdev=211.40, samples=19 00:31:46.290 lat (msec) : 50=94.22%, 100=0.11%, 
250=2.94%, 500=2.72% 00:31:46.290 cpu : usr=98.47%, sys=1.02%, ctx=19, majf=0, minf=39 00:31:46.290 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600975: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=361, BW=1446KiB/s (1481kB/s)(14.2MiB/10027msec) 00:31:46.290 slat (usec): min=8, max=752, avg=37.29, stdev=33.06 00:31:46.290 clat (msec): min=6, max=305, avg=43.95, stdev=50.22 00:31:46.290 lat (msec): min=6, max=305, avg=43.98, stdev=50.22 00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 213], 00:31:46.290 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.290 | 99.99th=[ 305] 00:31:46.290 bw ( KiB/s): min= 256, max= 2048, per=4.17%, avg=1443.40, stdev=832.44, samples=20 00:31:46.290 iops : min= 64, max= 512, avg=360.85, stdev=208.11, samples=20 00:31:46.290 lat (msec) : 10=0.44%, 20=0.39%, 50=91.97%, 100=1.02%, 250=3.97% 00:31:46.290 lat (msec) : 500=2.21% 00:31:46.290 cpu : usr=95.44%, sys=2.34%, ctx=82, majf=0, minf=34 00:31:46.290 IO depths : 1=3.6%, 2=8.7%, 4=20.7%, 8=58.1%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600976: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=360, BW=1443KiB/s (1478kB/s)(14.1MiB/10024msec) 00:31:46.290 slat (usec): min=8, max=132, avg=32.02, stdev=19.37 00:31:46.290 clat (msec): min=28, max=321, avg=44.08, stdev=50.17 00:31:46.290 lat (msec): min=28, max=321, avg=44.12, stdev=50.17 00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 211], 00:31:46.290 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 321], 00:31:46.290 | 99.99th=[ 321] 00:31:46.290 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1439.75, stdev=830.25, samples=20 00:31:46.290 iops : min= 64, max= 512, avg=359.90, stdev=207.53, samples=20 00:31:46.290 lat (msec) : 50=93.36%, 100=0.44%, 250=3.98%, 500=2.21% 00:31:46.290 cpu : usr=98.21%, sys=1.19%, ctx=58, majf=0, minf=28 00:31:46.290 IO depths : 1=4.9%, 2=10.9%, 4=24.0%, 8=52.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600977: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=359, BW=1439KiB/s 
(1473kB/s)(14.1MiB/10009msec) 00:31:46.290 slat (usec): min=7, max=151, avg=33.11, stdev=18.74 00:31:46.290 clat (msec): min=27, max=305, avg=44.22, stdev=51.57 00:31:46.290 lat (msec): min=27, max=305, avg=44.25, stdev=51.57 00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 215], 00:31:46.290 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.290 | 99.99th=[ 305] 00:31:46.290 bw ( KiB/s): min= 256, max= 2048, per=4.07%, avg=1407.74, stdev=844.55, samples=19 00:31:46.290 iops : min= 64, max= 512, avg=351.89, stdev=211.11, samples=19 00:31:46.290 lat (msec) : 50=93.78%, 100=0.44%, 250=3.11%, 500=2.67% 00:31:46.290 cpu : usr=98.38%, sys=1.22%, ctx=15, majf=0, minf=40 00:31:46.290 IO depths : 1=3.0%, 2=9.2%, 4=24.8%, 8=53.5%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:46.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.290 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.290 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.290 filename2: (groupid=0, jobs=1): err= 0: pid=3600978: Wed Jul 10 11:02:00 2024 00:31:46.290 read: IOPS=359, BW=1436KiB/s (1471kB/s)(14.0MiB/10005msec) 00:31:46.290 slat (usec): min=7, max=123, avg=35.77, stdev=15.51 00:31:46.290 clat (msec): min=9, max=468, avg=44.28, stdev=54.14 00:31:46.290 lat (msec): min=9, max=468, avg=44.31, stdev=54.15 00:31:46.290 clat percentiles (msec): 00:31:46.290 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.290 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.290 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 35], 95.00th=[ 213], 00:31:46.290 | 99.00th=[ 300], 99.50th=[ 355], 99.90th=[ 401], 99.95th=[ 468], 00:31:46.290 | 99.99th=[ 468] 00:31:46.290 bw ( KiB/s): min= 128, max= 2064, per=4.04%, avg=1397.89, stdev=848.93, samples=19 00:31:46.291 iops : min= 32, max= 516, avg=349.47, stdev=212.23, samples=19 00:31:46.291 lat (msec) : 10=0.19%, 20=1.28%, 50=92.15%, 100=0.58%, 250=3.12% 00:31:46.291 lat (msec) : 500=2.67% 00:31:46.291 cpu : usr=98.24%, sys=1.18%, ctx=27, majf=0, minf=37 00:31:46.291 IO depths : 1=2.1%, 2=7.1%, 4=20.7%, 8=59.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:46.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.291 complete : 0=0.0%, 4=93.3%, 8=1.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.291 issued rwts: total=3592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.291 filename2: (groupid=0, jobs=1): err= 0: pid=3600979: Wed Jul 10 11:02:00 2024 00:31:46.291 read: IOPS=360, BW=1441KiB/s (1476kB/s)(14.1MiB/10009msec) 00:31:46.291 slat (usec): min=6, max=188, avg=34.28, stdev=18.51 00:31:46.291 clat (msec): min=16, max=305, avg=44.14, stdev=51.55 00:31:46.291 lat (msec): min=16, max=305, avg=44.17, stdev=51.55 00:31:46.291 clat percentiles (msec): 00:31:46.291 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:31:46.291 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:31:46.291 | 70.00th=[ 32], 80.00th=[ 32], 90.00th=[ 33], 95.00th=[ 215], 00:31:46.291 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:31:46.291 | 99.99th=[ 305] 00:31:46.291 bw ( KiB/s): min= 
256, max= 2096, per=4.15%, avg=1435.75, stdev=831.79, samples=20 00:31:46.291 iops : min= 64, max= 524, avg=358.90, stdev=207.92, samples=20 00:31:46.291 lat (msec) : 20=0.22%, 50=93.57%, 100=0.44%, 250=3.11%, 500=2.66% 00:31:46.291 cpu : usr=97.06%, sys=1.73%, ctx=56, majf=0, minf=33 00:31:46.291 IO depths : 1=1.5%, 2=7.4%, 4=23.8%, 8=56.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:31:46.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.291 complete : 0=0.0%, 4=94.1%, 8=0.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.291 issued rwts: total=3606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.291 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:46.291 00:31:46.291 Run status group 0 (all jobs): 00:31:46.291 READ: bw=33.8MiB/s (35.4MB/s), 1434KiB/s-1458KiB/s (1468kB/s-1493kB/s), io=339MiB (355MB), run=10002-10028msec 00:31:46.291 11:02:01 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:46.291 11:02:01 -- target/dif.sh@43 -- # local sub 00:31:46.291 11:02:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:46.291 11:02:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:46.291 11:02:01 -- target/dif.sh@36 -- # local sub_id=0 00:31:46.291 11:02:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:46.291 11:02:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:46.291 11:02:01 -- target/dif.sh@36 -- # local sub_id=1 00:31:46.291 11:02:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:46.291 11:02:01 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:46.291 11:02:01 -- target/dif.sh@36 -- # local sub_id=2 00:31:46.291 11:02:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:46.291 11:02:01 -- target/dif.sh@115 -- # bs=8k,16k,128k 
00:31:46.291 11:02:01 -- target/dif.sh@115 -- # numjobs=2 00:31:46.291 11:02:01 -- target/dif.sh@115 -- # iodepth=8 00:31:46.291 11:02:01 -- target/dif.sh@115 -- # runtime=5 00:31:46.291 11:02:01 -- target/dif.sh@115 -- # files=1 00:31:46.291 11:02:01 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:46.291 11:02:01 -- target/dif.sh@28 -- # local sub 00:31:46.291 11:02:01 -- target/dif.sh@30 -- # for sub in "$@" 00:31:46.291 11:02:01 -- target/dif.sh@31 -- # create_subsystem 0 00:31:46.291 11:02:01 -- target/dif.sh@18 -- # local sub_id=0 00:31:46.291 11:02:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 bdev_null0 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 [2024-07-10 11:02:01.173962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@30 -- # for sub in "$@" 00:31:46.291 11:02:01 -- target/dif.sh@31 -- # create_subsystem 1 00:31:46.291 11:02:01 -- target/dif.sh@18 -- # local sub_id=1 00:31:46.291 11:02:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 bdev_null1 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.291 11:02:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.291 11:02:01 -- 
common/autotest_common.sh@10 -- # set +x 00:31:46.291 11:02:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.291 11:02:01 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:46.291 11:02:01 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:46.291 11:02:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:46.291 11:02:01 -- nvmf/common.sh@520 -- # config=() 00:31:46.291 11:02:01 -- nvmf/common.sh@520 -- # local subsystem config 00:31:46.291 11:02:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:46.291 11:02:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.291 11:02:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:46.291 { 00:31:46.291 "params": { 00:31:46.291 "name": "Nvme$subsystem", 00:31:46.291 "trtype": "$TEST_TRANSPORT", 00:31:46.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.291 "adrfam": "ipv4", 00:31:46.291 "trsvcid": "$NVMF_PORT", 00:31:46.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.291 "hdgst": ${hdgst:-false}, 00:31:46.291 "ddgst": ${ddgst:-false} 00:31:46.291 }, 00:31:46.291 "method": "bdev_nvme_attach_controller" 00:31:46.291 } 00:31:46.291 EOF 00:31:46.291 )") 00:31:46.291 11:02:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.291 11:02:01 -- target/dif.sh@82 -- # gen_fio_conf 00:31:46.291 11:02:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:46.291 11:02:01 -- target/dif.sh@54 -- # local file 00:31:46.291 11:02:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:46.291 11:02:01 -- target/dif.sh@56 -- # cat 00:31:46.291 11:02:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:46.292 11:02:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.292 11:02:01 -- common/autotest_common.sh@1320 -- # shift 00:31:46.292 11:02:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:46.292 11:02:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.292 11:02:01 -- nvmf/common.sh@542 -- # cat 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.292 11:02:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:46.292 11:02:01 -- target/dif.sh@72 -- # (( file <= files )) 00:31:46.292 11:02:01 -- target/dif.sh@73 -- # cat 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:46.292 11:02:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:46.292 11:02:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:46.292 { 00:31:46.292 "params": { 00:31:46.292 "name": "Nvme$subsystem", 00:31:46.292 "trtype": "$TEST_TRANSPORT", 00:31:46.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:46.292 "adrfam": "ipv4", 00:31:46.292 "trsvcid": "$NVMF_PORT", 00:31:46.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:46.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:46.292 "hdgst": ${hdgst:-false}, 00:31:46.292 "ddgst": ${ddgst:-false} 00:31:46.292 }, 00:31:46.292 "method": "bdev_nvme_attach_controller" 00:31:46.292 } 00:31:46.292 EOF 00:31:46.292 )") 00:31:46.292 11:02:01 -- 
nvmf/common.sh@542 -- # cat 00:31:46.292 11:02:01 -- target/dif.sh@72 -- # (( file++ )) 00:31:46.292 11:02:01 -- target/dif.sh@72 -- # (( file <= files )) 00:31:46.292 11:02:01 -- nvmf/common.sh@544 -- # jq . 00:31:46.292 11:02:01 -- nvmf/common.sh@545 -- # IFS=, 00:31:46.292 11:02:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:46.292 "params": { 00:31:46.292 "name": "Nvme0", 00:31:46.292 "trtype": "tcp", 00:31:46.292 "traddr": "10.0.0.2", 00:31:46.292 "adrfam": "ipv4", 00:31:46.292 "trsvcid": "4420", 00:31:46.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:46.292 "hdgst": false, 00:31:46.292 "ddgst": false 00:31:46.292 }, 00:31:46.292 "method": "bdev_nvme_attach_controller" 00:31:46.292 },{ 00:31:46.292 "params": { 00:31:46.292 "name": "Nvme1", 00:31:46.292 "trtype": "tcp", 00:31:46.292 "traddr": "10.0.0.2", 00:31:46.292 "adrfam": "ipv4", 00:31:46.292 "trsvcid": "4420", 00:31:46.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:46.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:46.292 "hdgst": false, 00:31:46.292 "ddgst": false 00:31:46.292 }, 00:31:46.292 "method": "bdev_nvme_attach_controller" 00:31:46.292 }' 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:46.292 11:02:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:46.292 11:02:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:46.292 11:02:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:46.292 11:02:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:46.292 11:02:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:46.292 11:02:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:46.292 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:46.292 ... 00:31:46.292 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:46.292 ... 00:31:46.292 fio-3.35 00:31:46.292 Starting 4 threads 00:31:46.292 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.292 [2024-07-10 11:02:02.052225] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:46.292 [2024-07-10 11:02:02.052287] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:50.473 00:31:50.473 filename0: (groupid=0, jobs=1): err= 0: pid=3602407: Wed Jul 10 11:02:07 2024 00:31:50.473 read: IOPS=1761, BW=13.8MiB/s (14.4MB/s)(68.8MiB/5002msec) 00:31:50.473 slat (nsec): min=5259, max=66663, avg=13762.95, stdev=7783.61 00:31:50.473 clat (usec): min=994, max=8106, avg=4501.05, stdev=649.74 00:31:50.473 lat (usec): min=1006, max=8122, avg=4514.82, stdev=650.12 00:31:50.473 clat percentiles (usec): 00:31:50.473 | 1.00th=[ 2933], 5.00th=[ 3556], 10.00th=[ 3851], 20.00th=[ 4113], 00:31:50.473 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:31:50.473 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5604], 00:31:50.473 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8029], 99.95th=[ 8094], 00:31:50.473 | 99.99th=[ 8094] 00:31:50.473 bw ( KiB/s): min=13424, max=15168, per=25.13%, avg=14089.60, stdev=565.14, samples=10 00:31:50.473 iops : min= 1678, max= 1896, avg=1761.40, stdev=70.73, samples=10 00:31:50.473 lat (usec) : 1000=0.01% 00:31:50.473 lat (msec) : 2=0.17%, 4=13.70%, 10=86.12% 00:31:50.473 cpu : usr=93.82%, sys=5.70%, ctx=9, majf=0, minf=0 00:31:50.473 IO depths : 1=0.1%, 2=4.2%, 4=64.3%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.473 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.473 issued rwts: total=8811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.473 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:50.473 filename0: (groupid=0, jobs=1): err= 0: pid=3602408: Wed Jul 10 11:02:07 2024 00:31:50.473 read: IOPS=1763, BW=13.8MiB/s (14.4MB/s)(69.5MiB/5044msec) 00:31:50.473 slat (nsec): min=6247, max=64372, avg=15993.95, stdev=8103.64 00:31:50.473 clat (usec): min=1581, max=46953, avg=4479.79, stdev=1324.84 00:31:50.473 lat (usec): min=1615, max=46985, avg=4495.78, stdev=1324.86 00:31:50.473 clat percentiles (usec): 00:31:50.473 | 1.00th=[ 2900], 5.00th=[ 3490], 10.00th=[ 3752], 20.00th=[ 4047], 00:31:50.473 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:31:50.473 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5604], 00:31:50.473 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 8160], 99.95th=[46924], 00:31:50.473 | 99.99th=[46924] 00:31:50.473 bw ( KiB/s): min=13792, max=14560, per=25.37%, avg=14224.00, stdev=234.30, samples=10 00:31:50.473 iops : min= 1724, max= 1820, avg=1778.00, stdev=29.29, samples=10 00:31:50.473 lat (msec) : 2=0.09%, 4=17.08%, 10=82.75%, 50=0.08% 00:31:50.473 cpu : usr=94.33%, sys=5.16%, ctx=14, majf=0, minf=9 00:31:50.473 IO depths : 1=0.2%, 2=6.8%, 4=64.5%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.473 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.473 issued rwts: total=8897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.473 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:50.473 filename1: (groupid=0, jobs=1): err= 0: pid=3602409: Wed Jul 10 11:02:07 2024 00:31:50.473 read: IOPS=1766, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5004msec) 00:31:50.473 slat (nsec): min=5264, max=66623, avg=13480.11, stdev=7790.20 00:31:50.473 clat (usec): min=1091, max=8179, avg=4485.22, stdev=776.82 00:31:50.473 lat (usec): min=1103, max=8192, avg=4498.70, stdev=776.62 00:31:50.473 clat 
percentiles (usec): 00:31:50.473 | 1.00th=[ 2868], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3949], 00:31:50.473 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:31:50.473 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5473], 95.00th=[ 6325], 00:31:50.474 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 7832], 99.95th=[ 7898], 00:31:50.474 | 99.99th=[ 8160] 00:31:50.474 bw ( KiB/s): min=13152, max=14832, per=25.21%, avg=14132.40, stdev=576.83, samples=10 00:31:50.474 iops : min= 1644, max= 1854, avg=1766.50, stdev=72.10, samples=10 00:31:50.474 lat (msec) : 2=0.05%, 4=21.84%, 10=78.12% 00:31:50.474 cpu : usr=94.14%, sys=5.38%, ctx=9, majf=0, minf=0 00:31:50.474 IO depths : 1=0.2%, 2=4.9%, 4=67.5%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.474 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.474 issued rwts: total=8839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:50.474 filename1: (groupid=0, jobs=1): err= 0: pid=3602410: Wed Jul 10 11:02:07 2024 00:31:50.474 read: IOPS=1759, BW=13.7MiB/s (14.4MB/s)(68.8MiB/5001msec) 00:31:50.474 slat (nsec): min=5221, max=66641, avg=13295.14, stdev=7872.89 00:31:50.474 clat (usec): min=1043, max=8002, avg=4501.98, stdev=785.23 00:31:50.474 lat (usec): min=1061, max=8017, avg=4515.27, stdev=784.87 00:31:50.474 clat percentiles (usec): 00:31:50.474 | 1.00th=[ 3064], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3982], 00:31:50.474 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:31:50.474 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5538], 95.00th=[ 6325], 00:31:50.474 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 7701], 00:31:50.474 | 99.99th=[ 8029] 00:31:50.474 bw ( KiB/s): min=13691, max=14832, per=25.23%, avg=14145.22, stdev=450.51, samples=9 00:31:50.474 iops : min= 1711, max= 1854, avg=1768.11, stdev=56.36, samples=9 00:31:50.474 lat (msec) : 2=0.12%, 4=21.41%, 10=78.47% 00:31:50.474 cpu : usr=93.88%, sys=5.64%, ctx=8, majf=0, minf=0 00:31:50.474 IO depths : 1=0.1%, 2=5.5%, 4=67.0%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.474 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.474 issued rwts: total=8801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:50.474 00:31:50.474 Run status group 0 (all jobs): 00:31:50.474 READ: bw=54.7MiB/s (57.4MB/s), 13.7MiB/s-13.8MiB/s (14.4MB/s-14.5MB/s), io=276MiB (290MB), run=5001-5044msec 00:31:50.732 11:02:07 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:50.732 11:02:07 -- target/dif.sh@43 -- # local sub 00:31:50.732 11:02:07 -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.732 11:02:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:50.732 11:02:07 -- target/dif.sh@36 -- # local sub_id=0 00:31:50.732 11:02:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 
-- common/autotest_common.sh@10 -- # set +x 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@45 -- # for sub in "$@" 00:31:50.732 11:02:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:50.732 11:02:07 -- target/dif.sh@36 -- # local sub_id=1 00:31:50.732 11:02:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 00:31:50.732 real 0m24.043s 00:31:50.732 user 4m29.474s 00:31:50.732 sys 0m7.560s 00:31:50.732 11:02:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 ************************************ 00:31:50.732 END TEST fio_dif_rand_params 00:31:50.732 ************************************ 00:31:50.732 11:02:07 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:50.732 11:02:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:50.732 11:02:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 ************************************ 00:31:50.732 START TEST fio_dif_digest 00:31:50.732 ************************************ 00:31:50.732 11:02:07 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:50.732 11:02:07 -- target/dif.sh@123 -- # local NULL_DIF 00:31:50.732 11:02:07 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:50.732 11:02:07 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:50.732 11:02:07 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:50.732 11:02:07 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:50.732 11:02:07 -- target/dif.sh@127 -- # numjobs=3 00:31:50.732 11:02:07 -- target/dif.sh@127 -- # iodepth=3 00:31:50.732 11:02:07 -- target/dif.sh@127 -- # runtime=10 00:31:50.732 11:02:07 -- target/dif.sh@128 -- # hdgst=true 00:31:50.732 11:02:07 -- target/dif.sh@128 -- # ddgst=true 00:31:50.732 11:02:07 -- target/dif.sh@130 -- # create_subsystems 0 00:31:50.732 11:02:07 -- target/dif.sh@28 -- # local sub 00:31:50.732 11:02:07 -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.732 11:02:07 -- target/dif.sh@31 -- # create_subsystem 0 00:31:50.732 11:02:07 -- target/dif.sh@18 -- # local sub_id=0 00:31:50.732 11:02:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 bdev_null0 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.732 11:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.732 11:02:07 -- common/autotest_common.sh@10 -- # set +x 00:31:50.732 [2024-07-10 11:02:07.508696] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.732 11:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.732 11:02:07 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:50.732 11:02:07 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:50.732 11:02:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:50.732 11:02:07 -- nvmf/common.sh@520 -- # config=() 00:31:50.732 11:02:07 -- nvmf/common.sh@520 -- # local subsystem config 00:31:50.732 11:02:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:50.732 11:02:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:50.732 { 00:31:50.732 "params": { 00:31:50.732 "name": "Nvme$subsystem", 00:31:50.732 "trtype": "$TEST_TRANSPORT", 00:31:50.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.732 "adrfam": "ipv4", 00:31:50.732 "trsvcid": "$NVMF_PORT", 00:31:50.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.732 "hdgst": ${hdgst:-false}, 00:31:50.732 "ddgst": ${ddgst:-false} 00:31:50.732 }, 00:31:50.732 "method": "bdev_nvme_attach_controller" 00:31:50.732 } 00:31:50.732 EOF 00:31:50.732 )") 00:31:50.732 11:02:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.732 11:02:07 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.732 11:02:07 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:50.732 11:02:07 -- target/dif.sh@82 -- # gen_fio_conf 00:31:50.732 11:02:07 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.732 11:02:07 -- target/dif.sh@54 -- # local file 00:31:50.732 11:02:07 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:50.732 11:02:07 -- target/dif.sh@56 -- # cat 00:31:50.732 11:02:07 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.732 11:02:07 -- common/autotest_common.sh@1320 -- # shift 00:31:50.732 11:02:07 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:50.732 11:02:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.732 11:02:07 -- nvmf/common.sh@542 -- # cat 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.732 11:02:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:50.732 11:02:07 -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:50.732 11:02:07 -- nvmf/common.sh@544 -- # jq . 
00:31:50.732 11:02:07 -- nvmf/common.sh@545 -- # IFS=, 00:31:50.732 11:02:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:50.732 "params": { 00:31:50.732 "name": "Nvme0", 00:31:50.732 "trtype": "tcp", 00:31:50.732 "traddr": "10.0.0.2", 00:31:50.732 "adrfam": "ipv4", 00:31:50.732 "trsvcid": "4420", 00:31:50.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.732 "hdgst": true, 00:31:50.732 "ddgst": true 00:31:50.732 }, 00:31:50.732 "method": "bdev_nvme_attach_controller" 00:31:50.732 }' 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:50.732 11:02:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:50.732 11:02:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:50.732 11:02:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:50.733 11:02:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:50.733 11:02:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:50.733 11:02:07 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:50.733 11:02:07 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.990 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:50.990 ... 00:31:50.990 fio-3.35 00:31:50.990 Starting 3 threads 00:31:50.990 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.556 [2024-07-10 11:02:08.259779] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:51.556 [2024-07-10 11:02:08.259835] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:03.751 00:32:03.751 filename0: (groupid=0, jobs=1): err= 0: pid=3603298: Wed Jul 10 11:02:18 2024 00:32:03.751 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(260MiB/10047msec) 00:32:03.751 slat (nsec): min=7214, max=46216, avg=17053.95, stdev=4091.29 00:32:03.751 clat (usec): min=8323, max=57007, avg=14433.63, stdev=2786.14 00:32:03.751 lat (usec): min=8337, max=57025, avg=14450.69, stdev=2786.17 00:32:03.751 clat percentiles (usec): 00:32:03.751 | 1.00th=[ 9765], 5.00th=[11994], 10.00th=[12911], 20.00th=[13566], 00:32:03.751 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:32:03.751 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:32:03.751 | 99.00th=[17171], 99.50th=[18744], 99.90th=[56361], 99.95th=[56886], 00:32:03.751 | 99.99th=[56886] 00:32:03.751 bw ( KiB/s): min=24064, max=28160, per=33.04%, avg=26624.00, stdev=1070.12, samples=20 00:32:03.751 iops : min= 188, max= 220, avg=208.00, stdev= 8.36, samples=20 00:32:03.751 lat (msec) : 10=1.49%, 20=98.13%, 50=0.10%, 100=0.29% 00:32:03.751 cpu : usr=94.35%, sys=5.18%, ctx=55, majf=0, minf=204 00:32:03.751 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.751 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:03.751 filename0: (groupid=0, jobs=1): err= 0: pid=3603299: Wed Jul 10 11:02:18 2024 00:32:03.751 read: IOPS=207, BW=25.9MiB/s (27.1MB/s)(260MiB/10048msec) 00:32:03.751 slat (nsec): min=8241, max=63340, avg=16882.69, stdev=4096.14 00:32:03.751 clat (usec): min=8759, max=56408, avg=14443.90, stdev=4397.43 00:32:03.751 lat (usec): min=8778, max=56426, avg=14460.78, stdev=4397.33 00:32:03.751 clat percentiles (usec): 00:32:03.751 | 1.00th=[ 9896], 5.00th=[12125], 10.00th=[12649], 20.00th=[13173], 00:32:03.751 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:32:03.751 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:32:03.751 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55837], 99.95th=[56361], 00:32:03.751 | 99.99th=[56361] 00:32:03.751 bw ( KiB/s): min=24832, max=28672, per=33.03%, avg=26611.20, stdev=1293.34, samples=20 00:32:03.751 iops : min= 194, max= 224, avg=207.90, stdev=10.10, samples=20 00:32:03.751 lat (msec) : 10=1.20%, 20=97.65%, 50=0.14%, 100=1.01% 00:32:03.751 cpu : usr=95.20%, sys=4.33%, ctx=22, majf=0, minf=207 00:32:03.751 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.751 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:03.751 filename0: (groupid=0, jobs=1): err= 0: pid=3603300: Wed Jul 10 11:02:18 2024 00:32:03.751 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(270MiB/10046msec) 00:32:03.751 slat (nsec): min=7471, max=65026, avg=16097.98, stdev=3937.19 00:32:03.751 clat (usec): min=7299, max=56083, avg=13898.25, stdev=2349.29 00:32:03.751 lat (usec): min=7318, max=56096, avg=13914.34, stdev=2349.30 00:32:03.751 clat 
percentiles (usec): 00:32:03.751 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[12125], 20.00th=[12911], 00:32:03.751 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:32:03.751 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:32:03.751 | 99.00th=[16712], 99.50th=[17171], 99.90th=[54264], 99.95th=[54264], 00:32:03.751 | 99.99th=[55837] 00:32:03.751 bw ( KiB/s): min=24832, max=29184, per=34.32%, avg=27650.60, stdev=1147.28, samples=20 00:32:03.751 iops : min= 194, max= 228, avg=216.00, stdev= 8.99, samples=20 00:32:03.751 lat (msec) : 10=3.01%, 20=96.76%, 50=0.09%, 100=0.14% 00:32:03.751 cpu : usr=94.33%, sys=5.15%, ctx=45, majf=0, minf=158 00:32:03.751 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.751 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:03.751 00:32:03.751 Run status group 0 (all jobs): 00:32:03.751 READ: bw=78.7MiB/s (82.5MB/s), 25.9MiB/s-26.9MiB/s (27.1MB/s-28.2MB/s), io=791MiB (829MB), run=10046-10048msec 00:32:03.751 11:02:18 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:03.751 11:02:18 -- target/dif.sh@43 -- # local sub 00:32:03.751 11:02:18 -- target/dif.sh@45 -- # for sub in "$@" 00:32:03.751 11:02:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:03.751 11:02:18 -- target/dif.sh@36 -- # local sub_id=0 00:32:03.751 11:02:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:03.751 11:02:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.751 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:32:03.751 11:02:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.751 11:02:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:03.751 11:02:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.751 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:32:03.751 11:02:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.751 00:32:03.751 real 0m11.147s 00:32:03.751 user 0m29.537s 00:32:03.751 sys 0m1.765s 00:32:03.751 11:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.751 11:02:18 -- common/autotest_common.sh@10 -- # set +x 00:32:03.751 ************************************ 00:32:03.751 END TEST fio_dif_digest 00:32:03.751 ************************************ 00:32:03.751 11:02:18 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:03.751 11:02:18 -- target/dif.sh@147 -- # nvmftestfini 00:32:03.751 11:02:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:03.751 11:02:18 -- nvmf/common.sh@116 -- # sync 00:32:03.751 11:02:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:03.751 11:02:18 -- nvmf/common.sh@119 -- # set +e 00:32:03.751 11:02:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:03.751 11:02:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:03.751 rmmod nvme_tcp 00:32:03.751 rmmod nvme_fabrics 00:32:03.751 rmmod nvme_keyring 00:32:03.752 11:02:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:03.752 11:02:18 -- nvmf/common.sh@123 -- # set -e 00:32:03.752 11:02:18 -- nvmf/common.sh@124 -- # return 0 00:32:03.752 11:02:18 -- nvmf/common.sh@477 -- # '[' -n 3596956 ']' 00:32:03.752 11:02:18 -- nvmf/common.sh@478 -- # killprocess 3596956 00:32:03.752 11:02:18 -- 
common/autotest_common.sh@926 -- # '[' -z 3596956 ']' 00:32:03.752 11:02:18 -- common/autotest_common.sh@930 -- # kill -0 3596956 00:32:03.752 11:02:18 -- common/autotest_common.sh@931 -- # uname 00:32:03.752 11:02:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:03.752 11:02:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3596956 00:32:03.752 11:02:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:03.752 11:02:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:03.752 11:02:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3596956' 00:32:03.752 killing process with pid 3596956 00:32:03.752 11:02:18 -- common/autotest_common.sh@945 -- # kill 3596956 00:32:03.752 11:02:18 -- common/autotest_common.sh@950 -- # wait 3596956 00:32:03.752 11:02:18 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:03.752 11:02:18 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:03.752 Waiting for block devices as requested 00:32:03.752 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:03.752 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:03.752 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:03.752 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:03.752 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:03.752 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:03.752 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:04.009 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:04.009 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:04.009 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:04.009 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:04.267 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:04.267 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:04.267 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:04.525 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:04.525 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:04.525 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:04.784 11:02:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:04.784 11:02:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:04.784 11:02:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:04.784 11:02:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:04.784 11:02:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.784 11:02:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.784 11:02:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.687 11:02:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:06.687 00:32:06.687 real 1m6.837s 00:32:06.687 user 6m26.920s 00:32:06.687 sys 0m18.137s 00:32:06.687 11:02:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.687 11:02:23 -- common/autotest_common.sh@10 -- # set +x 00:32:06.687 ************************************ 00:32:06.687 END TEST nvmf_dif 00:32:06.687 ************************************ 00:32:06.687 11:02:23 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:06.687 11:02:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:06.687 11:02:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:06.687 11:02:23 -- common/autotest_common.sh@10 -- # set +x 00:32:06.687 ************************************ 00:32:06.687 START TEST nvmf_abort_qd_sizes 
00:32:06.687 ************************************ 00:32:06.687 11:02:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:06.687 * Looking for test storage... 00:32:06.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.687 11:02:23 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.687 11:02:23 -- nvmf/common.sh@7 -- # uname -s 00:32:06.687 11:02:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.687 11:02:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.687 11:02:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.687 11:02:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.687 11:02:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.687 11:02:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.687 11:02:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.687 11:02:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.687 11:02:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.687 11:02:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.687 11:02:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:06.687 11:02:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:06.687 11:02:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.687 11:02:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.687 11:02:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.687 11:02:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.687 11:02:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.687 11:02:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.687 11:02:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.687 11:02:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.687 11:02:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.687 11:02:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.687 11:02:23 -- paths/export.sh@5 -- # export PATH 00:32:06.687 11:02:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.687 11:02:23 -- nvmf/common.sh@46 -- # : 0 00:32:06.687 11:02:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:06.687 11:02:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:06.687 11:02:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:06.687 11:02:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.687 11:02:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.687 11:02:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:06.687 11:02:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:06.687 11:02:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:06.687 11:02:23 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:06.687 11:02:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:06.687 11:02:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.687 11:02:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:06.687 11:02:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:06.687 11:02:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:06.687 11:02:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.687 11:02:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:06.687 11:02:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.687 11:02:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:06.687 11:02:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:06.687 11:02:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:06.687 11:02:23 -- common/autotest_common.sh@10 -- # set +x 00:32:08.586 11:02:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:08.586 11:02:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:08.586 11:02:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:08.586 11:02:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:08.586 11:02:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:08.586 11:02:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:08.586 11:02:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:08.586 11:02:25 -- nvmf/common.sh@294 -- # net_devs=() 00:32:08.586 11:02:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:08.586 11:02:25 -- nvmf/common.sh@295 -- # e810=() 00:32:08.586 11:02:25 -- nvmf/common.sh@295 -- # local -ga e810 00:32:08.586 11:02:25 -- nvmf/common.sh@296 -- # x722=() 00:32:08.586 11:02:25 -- nvmf/common.sh@296 -- # local -ga x722 00:32:08.587 11:02:25 -- nvmf/common.sh@297 -- # mlx=() 00:32:08.587 11:02:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:08.587 11:02:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.587 11:02:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:08.587 11:02:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:08.587 11:02:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:08.587 11:02:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:08.587 11:02:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:08.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:08.587 11:02:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:08.587 11:02:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:08.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:08.587 11:02:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:08.587 11:02:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:08.587 11:02:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.587 11:02:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:08.587 11:02:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.587 11:02:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:08.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:08.587 11:02:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.587 11:02:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:08.587 11:02:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.587 11:02:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:08.587 11:02:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.587 11:02:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:08.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:08.587 11:02:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.587 11:02:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:08.587 11:02:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:08.587 11:02:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:08.587 11:02:25 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:08.587 11:02:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:08.587 11:02:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.587 11:02:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.587 11:02:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.587 11:02:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:08.587 11:02:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.587 11:02:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.587 11:02:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:08.587 11:02:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.587 11:02:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.587 11:02:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:08.587 11:02:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:08.587 11:02:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.587 11:02:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.844 11:02:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.844 11:02:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.844 11:02:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:08.844 11:02:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.844 11:02:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.844 11:02:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.844 11:02:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:08.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:32:08.844 00:32:08.844 --- 10.0.0.2 ping statistics --- 00:32:08.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.844 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:32:08.844 11:02:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:32:08.844 00:32:08.844 --- 10.0.0.1 ping statistics --- 00:32:08.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.844 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:32:08.844 11:02:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.845 11:02:25 -- nvmf/common.sh@410 -- # return 0 00:32:08.845 11:02:25 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:08.845 11:02:25 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:09.777 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:09.777 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:09.777 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:10.035 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:10.035 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:10.035 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:10.035 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:10.035 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:10.035 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:10.971 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:10.971 11:02:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.971 11:02:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:10.971 11:02:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:10.971 11:02:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.971 11:02:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:10.971 11:02:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:10.971 11:02:27 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:10.971 11:02:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:10.971 11:02:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:10.971 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:32:10.971 11:02:27 -- nvmf/common.sh@469 -- # nvmfpid=3608192 00:32:10.971 11:02:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:10.971 11:02:27 -- nvmf/common.sh@470 -- # waitforlisten 3608192 00:32:10.971 11:02:27 -- common/autotest_common.sh@819 -- # '[' -z 3608192 ']' 00:32:10.971 11:02:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.971 11:02:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:10.971 11:02:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.971 11:02:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:10.971 11:02:27 -- common/autotest_common.sh@10 -- # set +x 00:32:11.230 [2024-07-10 11:02:27.815290] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:32:11.230 [2024-07-10 11:02:27.815375] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.230 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.230 [2024-07-10 11:02:27.883455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:11.230 [2024-07-10 11:02:27.976154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:11.230 [2024-07-10 11:02:27.976307] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.230 [2024-07-10 11:02:27.976324] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.230 [2024-07-10 11:02:27.976336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.230 [2024-07-10 11:02:27.976387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.230 [2024-07-10 11:02:27.976414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.230 [2024-07-10 11:02:27.976470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:11.230 [2024-07-10 11:02:27.976474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.163 11:02:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:12.163 11:02:28 -- common/autotest_common.sh@852 -- # return 0 00:32:12.163 11:02:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:12.163 11:02:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:12.163 11:02:28 -- common/autotest_common.sh@10 -- # set +x 00:32:12.163 11:02:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:12.163 11:02:28 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:12.163 11:02:28 -- scripts/common.sh@312 -- # local nvmes 00:32:12.163 11:02:28 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:12.163 11:02:28 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:12.163 11:02:28 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:12.163 11:02:28 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:12.163 11:02:28 -- scripts/common.sh@322 -- # uname -s 00:32:12.163 11:02:28 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:12.163 11:02:28 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:12.163 11:02:28 -- scripts/common.sh@327 -- # (( 1 )) 00:32:12.163 11:02:28 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:12.163 11:02:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:12.163 11:02:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:12.163 11:02:28 -- common/autotest_common.sh@10 -- # set +x 00:32:12.163 ************************************ 00:32:12.163 START TEST 
spdk_target_abort 00:32:12.163 ************************************ 00:32:12.163 11:02:28 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:12.163 11:02:28 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:12.163 11:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:12.163 11:02:28 -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 spdk_targetn1 00:32:15.444 11:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:15.444 11:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.444 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 [2024-07-10 11:02:31.660089] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.444 11:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:15.444 11:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.444 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 11:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:15.444 11:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.444 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 11:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:15.444 11:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.444 11:02:31 -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 [2024-07-10 11:02:31.692335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.444 11:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:15.444 11:02:31 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:15.444 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.735 Initializing NVMe Controllers 00:32:18.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:18.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:18.735 Initialization complete. Launching workers. 00:32:18.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10732, failed: 0 00:32:18.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1228, failed to submit 9504 00:32:18.735 success 779, unsuccess 449, failed 0 00:32:18.735 11:02:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:18.735 11:02:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:18.735 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.006 Initializing NVMe Controllers 00:32:22.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:22.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:22.006 Initialization complete. Launching workers. 00:32:22.006 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8608, failed: 0 00:32:22.006 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1222, failed to submit 7386 00:32:22.006 success 361, unsuccess 861, failed 0 00:32:22.006 11:02:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:22.006 11:02:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:22.006 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.287 Initializing NVMe Controllers 00:32:25.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:25.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:25.287 Initialization complete. Launching workers. 
00:32:25.287 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31727, failed: 0 00:32:25.287 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2606, failed to submit 29121 00:32:25.287 success 547, unsuccess 2059, failed 0 00:32:25.287 11:02:41 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:25.287 11:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.287 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:32:25.287 11:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.287 11:02:41 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:25.287 11:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.287 11:02:41 -- common/autotest_common.sh@10 -- # set +x 00:32:26.221 11:02:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.221 11:02:42 -- target/abort_qd_sizes.sh@62 -- # killprocess 3608192 00:32:26.221 11:02:42 -- common/autotest_common.sh@926 -- # '[' -z 3608192 ']' 00:32:26.221 11:02:42 -- common/autotest_common.sh@930 -- # kill -0 3608192 00:32:26.221 11:02:42 -- common/autotest_common.sh@931 -- # uname 00:32:26.221 11:02:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:26.221 11:02:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3608192 00:32:26.221 11:02:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:26.221 11:02:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:26.221 11:02:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3608192' 00:32:26.221 killing process with pid 3608192 00:32:26.221 11:02:42 -- common/autotest_common.sh@945 -- # kill 3608192 00:32:26.221 11:02:42 -- common/autotest_common.sh@950 -- # wait 3608192 00:32:26.479 00:32:26.479 real 0m14.368s 00:32:26.479 user 0m56.998s 00:32:26.479 sys 0m2.623s 00:32:26.479 11:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.479 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:32:26.479 ************************************ 00:32:26.479 END TEST spdk_target_abort 00:32:26.479 ************************************ 00:32:26.479 11:02:43 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:26.479 11:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:26.479 11:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:26.479 11:02:43 -- common/autotest_common.sh@10 -- # set +x 00:32:26.479 ************************************ 00:32:26.479 START TEST kernel_target_abort 00:32:26.479 ************************************ 00:32:26.479 11:02:43 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:26.479 11:02:43 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:26.479 11:02:43 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:26.479 11:02:43 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:26.479 11:02:43 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:26.479 11:02:43 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:26.479 11:02:43 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:26.479 11:02:43 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:26.479 11:02:43 -- nvmf/common.sh@627 -- # local block nvme 00:32:26.479 11:02:43 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:26.479 11:02:43 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:26.479 11:02:43 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:26.479 11:02:43 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:27.852 Waiting for block devices as requested 00:32:27.852 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:27.852 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.852 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.111 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.111 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.111 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.111 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.370 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:28.370 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:28.370 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:28.370 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:28.629 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.629 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.629 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.886 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.886 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:28.886 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:28.886 11:02:45 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:28.886 11:02:45 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:28.886 11:02:45 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:28.886 11:02:45 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:28.886 11:02:45 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:29.144 No valid GPT data, bailing 00:32:29.144 11:02:45 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:29.144 11:02:45 -- scripts/common.sh@393 -- # pt= 00:32:29.144 11:02:45 -- scripts/common.sh@394 -- # return 1 00:32:29.144 11:02:45 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:29.144 11:02:45 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:29.144 11:02:45 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:29.144 11:02:45 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:29.144 11:02:45 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:29.144 11:02:45 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:29.144 11:02:45 -- nvmf/common.sh@654 -- # echo 1 00:32:29.144 11:02:45 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:29.144 11:02:45 -- nvmf/common.sh@656 -- # echo 1 00:32:29.144 11:02:45 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:29.144 11:02:45 -- nvmf/common.sh@663 -- # echo tcp 00:32:29.144 11:02:45 -- nvmf/common.sh@664 -- # echo 4420 00:32:29.144 11:02:45 -- nvmf/common.sh@665 -- # echo ipv4 00:32:29.144 11:02:45 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:29.144 11:02:45 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:29.144 00:32:29.144 Discovery Log Number of Records 2, Generation counter 2 00:32:29.144 =====Discovery Log Entry 0====== 00:32:29.144 trtype: tcp 00:32:29.144 adrfam: ipv4 00:32:29.144 
subtype: current discovery subsystem 00:32:29.144 treq: not specified, sq flow control disable supported 00:32:29.144 portid: 1 00:32:29.144 trsvcid: 4420 00:32:29.144 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:29.144 traddr: 10.0.0.1 00:32:29.144 eflags: none 00:32:29.144 sectype: none 00:32:29.144 =====Discovery Log Entry 1====== 00:32:29.144 trtype: tcp 00:32:29.144 adrfam: ipv4 00:32:29.144 subtype: nvme subsystem 00:32:29.144 treq: not specified, sq flow control disable supported 00:32:29.144 portid: 1 00:32:29.144 trsvcid: 4420 00:32:29.144 subnqn: kernel_target 00:32:29.144 traddr: 10.0.0.1 00:32:29.144 eflags: none 00:32:29.144 sectype: none 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:29.145 11:02:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:29.145 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.423 Initializing NVMe Controllers 00:32:32.423 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:32.423 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:32.423 Initialization complete. Launching workers. 
00:32:32.423 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31822, failed: 0 00:32:32.423 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31822, failed to submit 0 00:32:32.423 success 0, unsuccess 31822, failed 0 00:32:32.423 11:02:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:32.423 11:02:48 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:32.423 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.771 Initializing NVMe Controllers 00:32:35.772 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:35.772 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:35.772 Initialization complete. Launching workers. 00:32:35.772 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65972, failed: 0 00:32:35.772 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16638, failed to submit 49334 00:32:35.772 success 0, unsuccess 16638, failed 0 00:32:35.772 11:02:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:35.772 11:02:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:35.772 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.052 Initializing NVMe Controllers 00:32:39.052 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:39.052 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:39.052 Initialization complete. Launching workers. 
00:32:39.052 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 62929, failed: 0 00:32:39.052 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15730, failed to submit 47199 00:32:39.052 success 0, unsuccess 15730, failed 0 00:32:39.052 11:02:55 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:39.052 11:02:55 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:39.052 11:02:55 -- nvmf/common.sh@677 -- # echo 0 00:32:39.052 11:02:55 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:39.052 11:02:55 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:39.052 11:02:55 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:39.052 11:02:55 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:39.052 11:02:55 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:39.052 11:02:55 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:39.052 00:32:39.052 real 0m12.034s 00:32:39.052 user 0m4.621s 00:32:39.052 sys 0m2.493s 00:32:39.052 11:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.052 11:02:55 -- common/autotest_common.sh@10 -- # set +x 00:32:39.052 ************************************ 00:32:39.052 END TEST kernel_target_abort 00:32:39.052 ************************************ 00:32:39.052 11:02:55 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:39.052 11:02:55 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:39.052 11:02:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:39.052 11:02:55 -- nvmf/common.sh@116 -- # sync 00:32:39.052 11:02:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:39.052 11:02:55 -- nvmf/common.sh@119 -- # set +e 00:32:39.052 11:02:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:39.052 11:02:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:39.052 rmmod nvme_tcp 00:32:39.052 rmmod nvme_fabrics 00:32:39.052 rmmod nvme_keyring 00:32:39.052 11:02:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:39.052 11:02:55 -- nvmf/common.sh@123 -- # set -e 00:32:39.052 11:02:55 -- nvmf/common.sh@124 -- # return 0 00:32:39.052 11:02:55 -- nvmf/common.sh@477 -- # '[' -n 3608192 ']' 00:32:39.052 11:02:55 -- nvmf/common.sh@478 -- # killprocess 3608192 00:32:39.052 11:02:55 -- common/autotest_common.sh@926 -- # '[' -z 3608192 ']' 00:32:39.052 11:02:55 -- common/autotest_common.sh@930 -- # kill -0 3608192 00:32:39.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3608192) - No such process 00:32:39.052 11:02:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3608192 is not found' 00:32:39.052 Process with pid 3608192 is not found 00:32:39.052 11:02:55 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:39.052 11:02:55 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:39.618 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:39.618 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:39.618 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:39.618 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:39.618 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:39.618 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:39.618 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:32:39.877 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:39.877 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:39.877 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:39.877 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:39.877 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:39.877 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:39.877 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:39.877 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:39.877 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:39.877 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:39.877 11:02:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:39.877 11:02:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:39.877 11:02:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:39.877 11:02:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:39.877 11:02:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.877 11:02:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.877 11:02:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.410 11:02:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:42.410 00:32:42.410 real 0m35.247s 00:32:42.410 user 1m3.853s 00:32:42.410 sys 0m8.340s 00:32:42.410 11:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:42.410 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:32:42.410 ************************************ 00:32:42.410 END TEST nvmf_abort_qd_sizes 00:32:42.410 ************************************ 00:32:42.410 11:02:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:42.410 11:02:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:42.410 11:02:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:42.410 11:02:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:42.410 11:02:58 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:42.410 11:02:58 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:42.410 11:02:58 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:42.410 11:02:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:42.410 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:32:42.410 11:02:58 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:42.410 11:02:58 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:42.410 11:02:58 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:42.410 11:02:58 -- common/autotest_common.sh@10 -- # set +x 00:32:43.784 INFO: APP EXITING 00:32:43.784 INFO: killing all VMs 00:32:43.784 INFO: killing vhost app 00:32:43.784 INFO: EXIT DONE 00:32:45.159 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:45.159 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:32:45.159 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:45.159 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:45.159 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:45.159 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:45.159 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:45.159 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:45.159 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:45.159 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:45.159 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:45.159 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:45.159 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:45.159 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:45.159 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:45.159 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:45.159 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:46.533 Cleaning 00:32:46.533 Removing: /var/run/dpdk/spdk0/config 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:46.533 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:46.533 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:46.533 Removing: /var/run/dpdk/spdk1/config 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:46.533 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:46.533 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:46.533 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:46.533 Removing: /var/run/dpdk/spdk2/config 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:46.533 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:46.534 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:46.534 Removing: /var/run/dpdk/spdk3/config 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:46.534 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:46.534 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:46.534 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:46.534 Removing: /var/run/dpdk/spdk4/config 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:46.534 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:46.534 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:46.534 Removing: /dev/shm/bdev_svc_trace.1 00:32:46.534 Removing: /dev/shm/nvmf_trace.0 00:32:46.534 Removing: /dev/shm/spdk_tgt_trace.pid3332982 00:32:46.534 Removing: /var/run/dpdk/spdk0 00:32:46.534 Removing: /var/run/dpdk/spdk1 00:32:46.534 Removing: /var/run/dpdk/spdk2 00:32:46.534 Removing: /var/run/dpdk/spdk3 00:32:46.534 Removing: /var/run/dpdk/spdk4 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3331216 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3331967 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3332982 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3333514 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3335239 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3336179 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3336369 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3336688 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3337020 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3337214 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3337374 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3337534 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3337741 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3338301 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3340710 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3340883 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3341182 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3341322 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3341628 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3341770 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3342082 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3342222 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3342520 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3342660 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3342824 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3342967 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3343334 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3343497 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3343692 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3343864 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344013 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344073 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344228 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344498 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344639 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344803 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3344943 00:32:46.534 
Removing: /var/run/dpdk/spdk_pid3345225 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3345364 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3345521 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3345668 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3345947 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3346092 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3346247 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3346390 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3346674 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3346813 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3346974 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3347117 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3347382 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3347540 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3347695 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3347841 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3348078 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3348261 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3348422 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3348564 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3348801 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3348989 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3349143 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3349290 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3349540 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3349721 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3349875 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3350018 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3350304 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3350451 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3350614 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3350767 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3351037 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3351181 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3351336 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3351524 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3351729 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3353914 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3409279 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3411941 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3418928 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3422384 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3425397 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3425813 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3429699 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3429707 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3430380 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3431035 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3431609 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3432023 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3432061 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3432289 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3432427 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3432432 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3433021 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3433668 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3434342 00:32:46.534 Removing: /var/run/dpdk/spdk_pid3434755 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3434766 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3434962 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3436068 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3436822 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3442425 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3442703 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3445358 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3449145 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3451382 00:32:46.792 
Removing: /var/run/dpdk/spdk_pid3458501 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3463904 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3465245 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3465933 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3476326 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3478673 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3481378 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3482591 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3484086 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3484238 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3484386 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3484662 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3485243 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3486617 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3487547 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3488072 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3492190 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3495637 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3499289 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3523561 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3526289 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3530386 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3531367 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3532494 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3535198 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3537592 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3541969 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3541972 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3544912 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3545048 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3545192 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3545466 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3545589 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3546576 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3547912 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3549129 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3550344 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3552182 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3553397 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3557282 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3557655 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3558982 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3559714 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3563478 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3565520 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3569123 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3572748 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3576409 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3576832 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3577254 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3577670 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3578261 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3578830 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3579261 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3579810 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3582590 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3583039 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3587106 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3587353 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3589038 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3594191 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3594200 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3597137 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3598573 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3600010 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3600806 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3602344 00:32:46.792 
Removing: /var/run/dpdk/spdk_pid3603116 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3608630 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3609034 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3609434 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3611044 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3611330 00:32:46.792 Removing: /var/run/dpdk/spdk_pid3611742 00:32:46.792 Clean 00:32:47.050 killing process with pid 3302661 00:32:55.161 killing process with pid 3302658 00:32:55.161 killing process with pid 3302660 00:32:55.161 killing process with pid 3302659 00:32:55.161 11:03:11 -- common/autotest_common.sh@1436 -- # return 0 00:32:55.161 11:03:11 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:55.161 11:03:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:55.161 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:32:55.161 11:03:11 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:55.161 11:03:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:55.161 11:03:11 -- common/autotest_common.sh@10 -- # set +x 00:32:55.161 11:03:11 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:55.161 11:03:11 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:55.161 11:03:11 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:55.161 11:03:11 -- spdk/autotest.sh@394 -- # hash lcov 00:32:55.161 11:03:11 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:55.161 11:03:11 -- spdk/autotest.sh@396 -- # hostname 00:32:55.161 11:03:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:55.420 geninfo: WARNING: invalid characters removed from testname! 
00:33:21.957 11:03:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:26.149 11:03:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:28.683 11:03:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:31.218 11:03:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:34.504 11:03:50 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:37.038 11:03:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:39.572 11:03:56 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:39.572 11:03:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.572 11:03:56 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:39.572 11:03:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.572 11:03:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.572 11:03:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.573 11:03:56 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.573 11:03:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.573 11:03:56 -- paths/export.sh@5 -- $ export PATH 00:33:39.573 11:03:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.573 11:03:56 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:39.573 11:03:56 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:39.573 11:03:56 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720602236.XXXXXX 00:33:39.573 11:03:56 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720602236.4tmLSf 00:33:39.573 11:03:56 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:39.573 11:03:56 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:33:39.573 11:03:56 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:33:39.573 11:03:56 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:33:39.573 11:03:56 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:39.573 11:03:56 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:39.573 11:03:56 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:39.573 11:03:56 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:39.573 11:03:56 -- common/autotest_common.sh@10 -- $ set +x 00:33:39.573 11:03:56 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:33:39.573 11:03:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:33:39.573 11:03:56 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:39.573 11:03:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:39.573 11:03:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:39.573 11:03:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:39.573 11:03:56 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:33:39.573 11:03:56 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:39.573 11:03:56 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:39.573 11:03:56 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:39.573 11:03:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:39.573 + [[ -n 3247875 ]] 00:33:39.573 + sudo kill 3247875 00:33:39.582 [Pipeline] } 00:33:39.605 [Pipeline] // stage 00:33:39.611 [Pipeline] } 00:33:39.632 [Pipeline] // timeout 00:33:39.638 [Pipeline] } 00:33:39.657 [Pipeline] // catchError 00:33:39.662 [Pipeline] } 00:33:39.684 [Pipeline] // wrap 00:33:39.691 [Pipeline] } 00:33:39.711 [Pipeline] // catchError 00:33:39.721 [Pipeline] stage 00:33:39.724 [Pipeline] { (Epilogue) 00:33:39.743 [Pipeline] catchError 00:33:39.745 [Pipeline] { 00:33:39.763 [Pipeline] echo 00:33:39.765 Cleanup processes 00:33:39.772 [Pipeline] sh 00:33:40.055 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:40.055 3624515 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:40.072 [Pipeline] sh 00:33:40.355 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:40.355 ++ awk '{print $1}' 00:33:40.355 ++ grep -v 'sudo pgrep' 00:33:40.355 + sudo kill -9 00:33:40.355 + true 00:33:40.369 [Pipeline] sh 00:33:40.668 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:50.710 [Pipeline] sh 00:33:50.993 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:50.993 Artifacts sizes are good 00:33:51.011 [Pipeline] archiveArtifacts 00:33:51.020 Archiving artifacts 00:33:51.231 [Pipeline] sh 00:33:51.516 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:51.531 [Pipeline] cleanWs 00:33:51.541 [WS-CLEANUP] Deleting project workspace... 00:33:51.541 [WS-CLEANUP] Deferred wipeout is used... 00:33:51.548 [WS-CLEANUP] done 00:33:51.550 [Pipeline] } 00:33:51.575 [Pipeline] // catchError 00:33:51.596 [Pipeline] sh 00:33:51.880 + logger -p user.info -t JENKINS-CI 00:33:51.890 [Pipeline] } 00:33:51.910 [Pipeline] // stage 00:33:51.918 [Pipeline] } 00:33:51.941 [Pipeline] // node 00:33:51.947 [Pipeline] End of Pipeline 00:33:51.989 Finished: SUCCESS